Zhixuan Lin

Github CV Email

zxlin.cs [AT] gmail.com

About Me

I recently graduated from Zhejiang University. I will join Mila as a master's student next year.

My life goal is to understand mind/consciousness/intelligence and to build them. I believe the important thing is to keep learning and exploring, so I won't limit my research to any particular topic until I have found an approach that I truly believe in.

Currently, I'm interested in two major lines of topics:

  • The RL line: the RL problem proper, where the reward signals are given and the goal is to maximize the reward. This line has seen huge successes: AlphaGo (less pure) and its successors.

  • The consciousness line: the goal is to understand what thinking and consciousness mean. This is not well formulated, since we don't know what we are optimizing. But even though we don't know what the problem is, we kind of know what the solution will be like: no matter what it is, it must contain some internal sequential process that resembles human thinking.

The RL formulation is beautiful, with three core concepts: temporal structure, interaction, and reward. The most important part is the specification of rewards. The principle of the solution is also clear: estimate the long-term consequences of states and actions, and improve the agent based on those estimates. As far as I know, all RL algorithms are based on this rough principle.
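
To make this principle concrete, here is a minimal toy sketch of tabular Q-learning (my own illustrative example; the `env` interface with `reset()` and `step(action)` is an assumption, not a specific library): it estimates the long-term return of each (state, action) pair and improves behavior by acting greedily with respect to those estimates.

```python
import random
from collections import defaultdict

# Toy sketch of the principle above: estimate the long-term value of
# (state, action) pairs, then act greedily with respect to the estimate.
# `env` is a hypothetical interface with reset() -> state and
# step(action) -> (next_state, reward, done).

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = defaultdict(float)  # q[(state, action)] -> estimated long-term return

    def act(state):
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: q[(state, a)])

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = act(state)
            next_state, reward, done = env.step(action)
            # TD target: immediate reward plus discounted estimate of the future.
            target = reward + (0.0 if done else gamma * max(q[(next_state, a)] for a in actions))
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
    return q
```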

I have two issues with this formulation. The first is practical: there are problems still to be solved, such as partial observability (or the agent state problem) and temporal abstraction. The second is intuitive and vague, but maybe more fundamental: both the RL problem and its solutions look too "mechanical", even though they are beautiful and make perfect sense to me. More specifically, what makes the RL formulation (in its current form) not so convincing to me is that I cannot imagine what role consciousness and thinking would play in the "final" solution, if such a solution exists.

I'm not saying that being mechanical is bad. It is possible that consciousness is not essential to intelligence, and that the whole reason we have it is to do mechanical planning (maybe MCTS). If that is the truth, I will accept it (after seeing enough evidence). But at least for now, I do believe there is a reason that we are conscious. So in my opinion, for any problem formulation of AI, the "ultimate" agent will be conscious and able to think.

Now the consciousness line. I'm actually more interested in this one. First, we need to admit that the problem is not well formulated. But to be clear, the problem is not to answer "for what problem are consciousness and thinking the optimal solution" (though that is a useful question to consider). The problem I'm really interested in is "what consciousness and thinking are", and then "how to build them". This problem is interesting in itself. I don't actually care whether consciousness is optimal in any sense, although I do believe it will be the optimal solution to some problem.

Currently, even though we don't know what objective we are trying to optimize, we have some clues about what the solution will be like: it is multi-step, sequential, with a memory, with attention, and so on. More thinking and experiments are needed.

About Mind/Intelligence

What I am 100% sure about:

  • It has to be sequential (stateful, has a memory, etc.). This is not something trivial.

  • It is world-agnostic (from Rich Sutton). Interestingly, consciousness or self-awareness is something world-agnostic.

  • If there is a definition of the world in the formulation, the agent itself must be part of that world.

  • Consciousness is crucial. Actually I am not 100% sure about this. It is more like a belief or intuition.

    • The two most central questions: 1) why are we experiencing this subjective experience (what is the advantage of being conscious?) and 2) how is this subjective experience produced from purely physical processes?

    • Some (weak) justifications:

      • We are weak in terms of computational ability, but we are self-conscious. There is no reason why a higher life form would not be self-conscious.

      • Cleverer animals tend to exhibit more consciousness.

      • An agent can be stupid while still being an AGI. Consider a human baby.

      • Evolution brought us here, so consciousness must be useful. And over the years we have become more and more self-conscious.

Facts:

  • We think using language and images instead of some hidden, internal representation. At least that's what we perceive. This is interesting because it is very inefficient. There must be a reason.

What might be true:

  • In terms of implementation, it has to be scalable in some way (modular, or built from simple units, like neurons).

  • The definition involves "goals". This is the perspective taken by Rich Sutton. But I still cannot understand this.

  • It should be self-improving.

  • Adversarial learning might be important. Becoming better than yourself always seems to be something learnable.

    • Think about it: if AlphaZero is trained against a very strong, fixed human player, it is not going to keep improving. This is kind of similar to the exploration problem in RL, and it might hint at a solution to that problem.

What I don't believe:

  • Pure SGD will work. There must be meta-level learning. SGD can be used in the first stage.

  • Causal learning is important. Instead, it should be something that emerges naturally if we choose the right paradigm. Actually, I seriously doubt that we should study it explicitly.

Other interesting questions:

  • Does it make sense to say whether a grid-world agent has intelligence or not?

  • Should the process of evolving from a single cell to today's humans be considered part of the "general principle of intelligence", or should we only consider the process by which humans developed consciousness? In other words, where should we start from?

  • Insects don't have a mind. We do. Some animals do. What's the fundamental criterion?

Thoughts:

  • If you think about it, the agent is not part of the environment, yet in robotics, everything about the robot itself (its position, etc.) is considered part of the environment/external world. This is very unnatural. I'm not saying that positions should not be considered external world state; I'm just saying that maybe we should reconsider this setup.

  • State abstraction may be important. In many cases, reaching a subgoal means reaching a "state" that has a high value (e.g., doing well in an exam, making money). However, these states are not physical world states, but rather abstract states.

  • We think in such an abstract and flexible way that it is impossible that the process of thinking is hard-coded in the structure of our brain (or in the weights of a neural network). By flexible I mean that even if the knowledge, or the content of our (short-term) memory, changes a little bit, what we will be thinking can be completely different. And our short-term memory doesn't just change a little bit: it changes rapidly and drastically. Behavior must therefore be conditioned on the contents/knowledge of this rapidly changing memory, in a very abstract way. At the same time, the mechanism by which the thinking process is conditioned on memory must be simple and scalable, because it is a basic building block of the thinking process. Combined with the facts that 1) the mind is sequential, 2) memory is huge (so attention is required for useful read and write operations), and 3) the conditioning mechanism must be flexible and simple at the same time, there are reasons to believe the thinking process acts in a Turing machine way, or a modern computer CPU way, where a controller acts on an external memory using attention (a toy sketch follows at the end of this list).

    • Learning where to write, where to read, and what to write (the transitions) is no different from learning how to interact with the world. It has to be learned, so RL may help here.

    • Let's think about this. If RL is the way that animals develop their reflexive behaviors, then we can imagine how the mind was developed by evolution:

      • Single cell animals

      • Animals that learn reflexive behaviors by RL

      • Animals that have a simple, small memory

      • This small memory gets larger, and attention appears to aid effective reading and writing

      • Animals learn advanced ways to read and write memory by RL

      • Then consciousness appears?
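
To make the controller-plus-external-memory picture above concrete, here is a toy sketch of content-based attention for reading and writing memory slots (loosely in the spirit of Neural Turing Machine-style heads; the shapes, the erase/add update rule, and the controller update are my own illustrative assumptions, not a specific model).

```python
import numpy as np

# Toy sketch: a controller reads from and writes to an external memory via
# content-based attention. All shapes and update rules here are illustrative
# assumptions, not a specific published architecture.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_weights(memory, key):
    # memory: (num_slots, slot_dim); key: (slot_dim,)
    scores = memory @ key / np.sqrt(memory.shape[1])  # similarity per slot
    return softmax(scores)                            # (num_slots,)

def read(memory, key):
    w = attention_weights(memory, key)
    return w @ memory                                 # weighted sum of slots

def write(memory, key, erase, add):
    # Soft write: each slot is erased and updated in proportion to its weight.
    w = attention_weights(memory, key)[:, None]       # (num_slots, 1)
    return memory * (1 - w * erase) + w * add

# One "thinking step": the controller state and the value just read from
# memory jointly determine what to write and what the next state will be.
rng = np.random.default_rng(0)
memory = rng.normal(size=(8, 16))     # 8 memory slots of dimension 16
controller = rng.normal(size=16)      # toy controller state

read_vec = read(memory, key=controller)
memory = write(memory, key=controller, erase=0.5,
               add=np.tanh(controller + read_vec))
controller = np.tanh(controller + read_vec)  # next controller state
```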

Publications

Improving Generative Imagination in Object-Centric World Models

Zhixuan Lin, Yi-Fu Wu, Skand Vishwanath Peri, Bofeng Fu, Jindong Jiang, Sungjin Ahn

ICML 2020

SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition

Zhixuan Lin*, Yi-Fu Wu*, Skand Vishwanath Peri*, Weihao Sun, Gautam Singh, Fei Deng, Jindong Jiang, Sungjin Ahn

ICLR 2020

[Project] [Code] [Paper]

GIFT: Learning Transformation Invariant Dense Visual Descriptors via Group CNNs

Yuan Liu, Zehong Shen, Zhixuan Lin, Sida Peng, Hujun Bao, Xiaowei Zhou

NeurIPS 2019

[Project] [Code] [Paper]