Should AI Research Try to Model the Human Brain?


Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. AI research has come a long, long way in the last few years. I remember that not so long ago, we were lucky if we could train a neural network to recognize traffic signs, and since then, so many things have happened: by harnessing the power of learning algorithms, we are now able to impersonate other people using a consumer camera, generate high-quality virtual human faces for people who don't exist, or pretend to dance like a pro by taking external video footage of a dancer and transferring it onto ourselves. Even though we are progressing at a staggering pace, there is a lot of debate as to which research direction is the most promising going forward. Roughly speaking, there are two schools of thought.

One: we recently talked about Richard Sutton's amazing article, "The Bitter Lesson", in which he makes a great argument that AI research should not try to mimic the way the human brain works. He argues that, instead, all we need to do is formulate our problems in a general manner, so that our learning algorithms may find something that is potentially much better suited to a problem than our brain is. I put a link to this video in the description if you're interested.

And two: a different school of thought says that we should take a good look at all these learning algorithms that use a lot of powerful hardware and can do wondrous things, like playing a bunch of Atari games at a superhuman level. Note that they learn orders of magnitude slower than the human brain does, so it should definitely be worth trying to study and model the human brain, at least until we can match it in terms of efficiency. This second school of thought is what we are going to talk about in this video.
As an example, let's take a look at deep reinforcement learning in the context of playing computer games. This technique is a combination of a neural network that processes the visual data we see on the screen and a reinforcement learner that makes the gameplay-related decisions. It is an absolutely amazing algorithm, a true breakthrough in AI research.
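For the technically curious, here is a minimal sketch of how such an agent is typically wired together in PyTorch. The layer sizes, the frame stacking, and the `num_actions` value are illustrative assumptions, not the exact architecture from the paper discussed here.

```python
# A minimal sketch (assumed architecture): a convolutional network reads the
# game screen, and a Q-learning-style head turns its output into a gameplay decision.
import torch
import torch.nn as nn

class AtariAgent(nn.Module):
    def __init__(self, num_actions: int):
        super().__init__()
        # The "eyes": convolutional layers that process the raw screen pixels.
        self.vision = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # The "decision maker": estimates a value for each possible action.
        self.q_head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, screen: torch.Tensor) -> torch.Tensor:
        # screen: a batch of 4 stacked 84x84 grayscale frames.
        return self.q_head(self.vision(screen))

agent = AtariAgent(num_actions=6)
frames = torch.zeros(1, 4, 84, 84)       # placeholder screen input
action = agent(frames).argmax(dim=1)     # pick the highest-valued action
```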
It is very powerful, however, also quite slow. And by slow, I mean that we can sit in front of our computer for an hour and wonder why our learner does not work at all, because it loses all of its lives almost immediately. If we remain patient, we find out that it does work, it just learns at a glacial pace.
So, why is this so slow? Well, two reasons. Reason number one is that the learning happens through incremental parameter adjustment. What does that mean? If a human fails really badly at a task, the human knows that a drastic change of strategy is necessary, while the deep reinforcement learner keeps applying tiny, tiny changes to its behavior and testing again whether things got better. This takes a while, and as a result, it seems unlikely to bear a close relation to how we humans think.
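As a rough illustration of what incremental parameter adjustment looks like in code, here is the kind of update such a learner performs. The stand-in model, the optimizer, and the learning rate are assumptions chosen for the example, but the key point holds: every update is a tiny gradient step, even after a catastrophic failure.

```python
# Sketch of incremental parameter adjustment (all values are illustrative).
# No matter how badly an episode went, each update only nudges the
# parameters by a small gradient step controlled by `lr`.
import torch
import torch.nn as nn

policy = nn.Linear(16, 4)                        # stand-in for a full agent
optimizer = torch.optim.RMSprop(policy.parameters(), lr=2.5e-4)

def update(states: torch.Tensor, target_values: torch.Tensor) -> None:
    loss = nn.functional.mse_loss(policy(states), target_values)
    optimizer.zero_grad()
    loss.backward()    # how wrong were we on this batch of experience?
    optimizer.step()   # tiny adjustment, even after a catastrophic failure

update(torch.randn(32, 16), torch.randn(32, 4))
```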
The second reason for it being slow is the presence of weak inductive bias. This means that the learner does not contain any information about the problem at hand, or in other words, it has never seen the game we're playing before and has no previous knowledge about games at all. This is desirable in some cases, because we can reuse one learning algorithm for a variety of problems. However, because this way the AI has to test a stupendously large number of potential hypotheses about the game, we have to pay for this convenience with a mightily inefficient algorithm.
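To make the upside of this generality concrete, here is a tiny sketch of the "one algorithm, many problems" idea using the Gymnasium toolkit; the environment names are just examples, and the learner's decision is stubbed out with a random action to keep the sketch short.

```python
# A minimal sketch of reusing one learning loop across unrelated problems
# (the Gymnasium environment names are examples; Atari games would slot in
# the same way with the Atari extras installed). The agent's decision is
# stubbed with a random action here -- a real learner would replace it.
import gymnasium as gym

def train(env_name: str, episodes: int = 5) -> None:
    env = gym.make(env_name)
    for _ in range(episodes):
        observation, _ = env.reset()
        done = False
        while not done:
            action = env.action_space.sample()   # placeholder for the agent
            observation, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
    env.close()

# The exact same code, with no problem-specific knowledge, runs on both tasks.
for problem in ["CartPole-v1", "Acrobot-v1"]:
    train(problem)
```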
But is this really all true? Does deep reinforcement learning really have to be so slow? And what on earth does this have to do with our brain? Well, this paper proposes an interesting counterargument: it argues that this is not necessarily true, and that with two well-thought-out changes, the efficiency of deep reinforcement learning may be drastically improved. And get this, it also tells us that these changes are possibly based in neuroscience.

So what are the two changes? One is using episodic memory, which stores previous experiences to help estimate the potential value of different actions, and this way, drastic parameter adjustments become a possibility. And it not only improves efficiency, there is more to it: recent studies show that episodic memory indeed contributes to the learning of real humans and animals alike.
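For intuition, here is a toy sketch in the spirit of episodic control; the embedding size, the neighbor count, and the plain nearest-neighbor lookup are assumptions for illustration, not the exact mechanism from the paper. The idea is that values are read directly out of stored experience, so a single surprising outcome can change behavior immediately.

```python
# Toy sketch of an episodic memory (in the spirit of episodic control).
# Instead of slowly baking values into network weights, we store each
# experience directly and estimate a new state's value from its nearest
# remembered neighbors -- which allows abrupt changes in behavior.
import numpy as np

class EpisodicMemory:
    def __init__(self, num_neighbors: int = 5):
        self.keys, self.values = [], []   # state embeddings and their returns
        self.num_neighbors = num_neighbors

    def store(self, embedding: np.ndarray, episode_return: float) -> None:
        self.keys.append(embedding)
        self.values.append(episode_return)

    def estimate_value(self, embedding: np.ndarray) -> float:
        # Look up the most similar past situations and average their outcomes.
        keys = np.stack(self.keys)
        distances = np.linalg.norm(keys - embedding, axis=1)
        nearest = np.argsort(distances)[: self.num_neighbors]
        return float(np.mean(np.asarray(self.values)[nearest]))

memory = EpisodicMemory()
for _ in range(100):                      # pretend we played 100 episodes
    memory.store(np.random.randn(8), np.random.rand())
print(memory.estimate_value(np.random.randn(8)))
```

Note that this differs from the standard experience replay buffer: replay reuses stored transitions to make more gradient updates to the weights, while episodic memory reads values out of the store directly at decision time.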
And two, it is beneficial to let the AI implement its own reinforcement learning algorithm, a concept often referred to as "learning to learn", or meta reinforcement learning. This also helps the agent obtain more general knowledge that can be reused across tasks, further improving its efficiency.
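In rough terms, this is a two-loop structure: a slow outer loop trains the weights across many tasks, and a fast inner loop adapts to each new task inside the activations of a recurrent network. The sketch below shows only that structure; the task generator, the network sizes, and the supervised stand-in for the reinforcement learning objective are all illustrative assumptions.

```python
# Schematic of "learning to learn" (meta reinforcement learning).
# The slow outer loop adjusts the weights across many tasks; the fast inner
# loop adapts to each task within the recurrent network's hidden state,
# without any weight changes. All specifics here are illustrative, and the
# cross-entropy loss is a supervised stand-in for a proper RL objective.
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=10, hidden_size=64, batch_first=True)
policy_head = nn.Linear(64, 4)
outer_optimizer = torch.optim.Adam(
    list(rnn.parameters()) + list(policy_head.parameters()), lr=1e-3
)

def sample_task():
    # Stand-in for drawing a new game or bandit problem from a task family.
    return torch.randn(1, 20, 10), torch.randint(0, 4, (1, 20))

for _ in range(1000):                     # slow outer loop: learn the learner
    observations, correct_actions = sample_task()
    hidden = None                         # fresh memory for the new task
    # Fast inner loop: the GRU's hidden state accumulates experience with
    # this particular task as it unrolls over the episode.
    features, hidden = rnn(observations, hidden)
    logits = policy_head(features)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, 4), correct_actions.reshape(-1)
    )
    outer_optimizer.zero_grad()
    loss.backward()
    outer_optimizer.step()
```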
Here you see a picture of an fMRI, and some regions are marked with yellow and orange. What could these possibly mean? Well, hold on to your papers, because these highlight neural structures that implement a very similar meta reinforcement learning scheme within the human brain. It turns out that meta reinforcement learning, or this "learning to learn" scheme, may not just be something that speeds up our AI algorithms, but may be a fundamental principle of the human brain as well. So these two changes to deep reinforcement learning not only drastically improve its efficiency, they also suddenly map quite a bit better onto our brain. How cool is that?

So, which school of thought are you most fond of? Should we model the brain, or should we listen to Richard Sutton's Bitter Lesson? Let me know in the comments. Also, make sure to have a look at the paper; I found it quite readable, and you really don't need to be a neuroscientist to enjoy it and learn quite a few new things. You will find it in the video description!

Now, you may have noticed that this paper doesn't contain the usual visual fireworks and is more complex than your average Two Minute Papers video, and hence, I expect it to get significantly fewer views. That's not a great business model, but no matter: I made this channel so I can share with you all these important lessons that I learned during my journey. This has been a true privilege, and I am thrilled that I am still able to talk about all these amazing papers without worrying too much about whether any of these videos will go viral or not. This has only been possible because of your unwavering support on Patreon.com/TwoMinutePapers. If you feel like chipping in, just click the Patreon link in the video description. If you are more of a crypto person, we also support cryptocurrencies like Bitcoin, Ethereum, and Litecoin; the addresses are also available in the description. Thanks for watching and for your generous support, and I'll see you next time!

100 thoughts on “Should AI Research Try to Model the Human Brain?”

  1. I suspect that even if principles from neuroscience are not optimal for AI, we will still end up implementing them in our best AI, simply because it is humans who are building AI.

  2. In my opinion, we shouldn't do it… however, I also believe that it is inevitable, so we might as well do it ourselves, so that we understand the results before a rogue nation attempts something like this and weaponizes it.

  3. If AI is built the way the human brain is, it will be just as dumb.
    Somebody should build it, to test this hypothesis.

  4. I think drawing inspiration from the human brain is just fine, but in no way should a direct model be attempted. Not just for moral reasons, but mostly because even a naive model of the brain is already resource intensive. It's best to tailor the AI solution to the machine.

  5. The only fact from The Bitter Lesson seems to be that a more powerful machine in the future may beat the context-based methods we have now.
    It is still worth making these contextual algorithms (i.e. putting human knowledge into agents) to test what works and how.
    Once a sufficiently large machine is available, implementing the methods of learning we have discovered will allow the new 'context-less' system to leverage them to its advantage.

    If we were to take The Bitter Lesson as a golden rule, every AI researcher should stop what they are doing and move into logic electronics. That seems like a real waste of opportunity considering how many tasks we CAN solve using the 'worse' method.

  6. I think we need to model the brain, but at this point we're modeling small parts of the brain. It's like we're modeling the object recognition part, but not the emotion part, the language part, the visual part, the short term memory part or the long term memory part (or a bunch of other parts).

    It may be that we need to add whatever it is that sleep does. We know that all brains stop learning and then stop remembering if you don't let them sleep. Yet no one knows what's happening in sleep.

  7. We should try to copy the function of the brain…. but not necessarily do it the way our brain does it. Mind you, I think we can do better than learning from millions of examples. We need a pretrained model and then when it estimates if something is "new" and if so, it should turn the learning rate up so it can learn new things quicker.

  8. Modeling the human brain without a human body, without a human environment, and without the human goal? Well, why not. Henry Markram with his 350 neuroscientists and a one billion funding will do it, he just needs another 5000 scientists, $570 billion additional money and 40 years of time. 😉

  9. The tasks we ask our models to complete are already brain specific, such as translating, object recognition etc. the closer it is to human brain, the better its performance.

  10. Isn't Richard Sutton also suggesting meta-learning as well? He calls it meta methods, but the essence seems to be the same. Which means the only contention with that article you are trying to make is that leveraging computation will have increasingly more benefits in the long run, which is true only if you include meta-learning. So essentially you have presented two similar views of the same point and they are not conflicting. Thank you for the video!

  11. I can't even imagine how this will change content creation in the future, especially when these applications are available to the general public. Just imagine a world of content creators, wherein the actions they commit, the places they go and even their own likeness is all a contrivance of some nearly realized neural network based artificer.

  12. Obviously we should keep modeling the human brain, because it is what we have and we need to better understand it.

    Only then can we have an AI find a better replacement for it. Maybe even one that can potentially keep up with the kind of AI it takes to find one.

  13. My professor at my university said this a while back, "at some point, AI researchers will split into two directions: one of them will try to make learning algorithms inspired by nature, and another who will make algorithms that scale with hardware and data."

  14. This meta-learning approach is a double-edged sword – it might provide the breakthrough we're looking for, but it will inevitably create undesirable consequences, as some dodgy behaviors are actually convergent instrumental goals (tactics such as free-riding, deception, deceit, and other problematic behaviors have evolved many times over, at every scale, from bacteria to chameleons to governments and firms; they are rational, computationally easier tactics to learn and deploy), and without extremely realistic simulations, there is no way to predict the almost infinite number of harmful yet humanly undetectable behaviors that a meta-learning AI might teach itself in order to achieve its goals.

  15. We should be modeling only the important parts, of course. The problem is in determining which parts and mechanisms our brains are using are the important ones.

  16. The ideas are not opposite or contrasting at all. What Sutton said remains true, intelligence will emerge and complexity will arise naturally.

  17. I think the end result will be something like the discovery of heavier-than-air flight.
    Trying to mimic a bird's flight is very difficult when you don't know the underlying aerodynamics that cause flight; instead, understanding how a wing works (like the Wright brothers did) is what let us achieve flight. Then, using that knowledge, we could understand how birds fly. My guess is that something similar will happen here too.

  18. “Hold on to your papers” hahahahhahahahah you get a like just for that, but I love the content anyway keep it up!

  19. I think that in the end, the best computation strategies will be dominated by computing power and non-human-style algorithms. But until we get there, copying/modeling the human brain continues to give us good insights and new innovations.

  20. Huh, and I thought my idea of making a network train itself (or rather, have a sibling network that adjusted the other one's weights) was at least somewhat unique, I'd love to see some papers on that subject. Right now I just have a prototype javascript toy visualization.

  21. The statement "(…) humans and animals alike" is very unscientific. Humans are animals as well. The statement should be "(…) human and OTHER animals alike".

  22. I love how we ask these philosophical questions about AI. Someone will try to use it for just about everything, whether we think it's ethical or not.

  23. Personally, I think we should be modeling the human brain (to inform us of good learning principles), the psyche (to inform us of good scheduling strategies), and evolution (to inform us of how and where to implement stochasticity), as well as using the models that incredible mathematicians and statisticians have discovered 'simply work best' (for the parts that currently need to be hard coded).

  24. I don't think the two actually contradict each other. If I understand correctly, The Bitter Lesson only says that we shouldn't try to program in domain-specific knowledge by hand, while this new approach is about sprinkling on a little general knowledge. Also, this doesn't have to be hand-crafted either: I think if an AI is powerful enough, we could simply tell it to learn how to learn efficiently. Essentially its task would be to design an AI that can learn more efficiently, and when it makes some progress, it can switch itself to the new design.

  25. In the past we were used to a single solution (the microprocessor) for all the computation needs. Now we use video cards for some needs (more than one), TPUs for others, quantum computers for others (they will not replace normal computers by the way, just be used in specific use cases).
    The same thing with learning models: we will not have a single solution for all the problems, but a complete toolbox. We need to learn which kind of problems can be better solved with each tool.
    And maybe even study how to make communication between these models, how to integrate them, how to transfer learning between them. Maybe even between them and our brains.

  26. We should model the brain to try and see if we can make an AI that can reason with us.
    Nevertheless, parallelism is kind, so try both approaches in different scenarios.

  27. See the case of the ants, for example: the colony splits up tasks "by trial and error" among its members and exchanges information. Are many small brains (ants, an exponential multiplier) better than a single big human brain (limited by its power alone), in that the colony can jump across scales, skew, and take shortcuts (a drastic adjustment of parameters)?

  28. I go with bitter lesson philosophy. AI's slowness to learn isn't just that it doesn't "know about games"… in fact it knows nothing about anything. Even a newborn human has vastly more knowledge and capability because its seemingly empty brain is actually highly primed by genetics (evolutionary information). IMO we'll shift to a paradigm of pre-trained NN's that have basic common sense and knowledge. These will be the primers for new and more advanced algorithms

  29. Both. I think if you're trying to solve one problem really really well then Richard's approach makes sense. But if we're trying to make general intelligence then we should map the brain.

  30. In VSauce's Mind Field S03E01, you get a really strong reminder of what gives us humans the amazing capabilities we have: we compromise. Instead of wasting a bunch of resources making sure we don't miss anything, we use that processing power on other processes that might be more useful. That leads to us being capable of making mistakes, but retrying is (almost always) a possibility. I think the brain is a great example of how we can get efficient results with low processing power, and if we ever try to create cybernetic life it would be good to know what not to "repeat".
    Another thing the human brain does that we seem to not want to do with AI is utilise past networks that were good at their job to facilitate future learning experiences. I expect making it easy to attach and detach networks from each other in some manner will be essential in the near future.

  31. They can both be true, just with different tradeoffs. My suspicion is that all cognitive biases serve useful purposes — /probably/ that they reduce the number of examples required to get "good enough" — and the only way to eliminate those biases and all the problems they cause is to learn very slowly. Cheap computing can in some cases make up for learning slowly, the exceptions being cases where the dataset is limited.

  32. I just feel the need to thank you. I'm lucky enough to be employed as a machine learning engineer and researcher, and I'm finally starting to break into academia in a very niche area of the intersection between group theory and machine learning. Your videos, however, are one of the best ways I can keep up on important developments in the machine learning landscape. The field is so vast with so many publications that it's impossible to truly read and understand them all while also working on your own stuff. So, for that, I am eternally grateful. It's people like you who make this dream career of mine even possible.

  33. Currently AI is basically just modeling fantasy. I have met a couple of people who can do real-world style transfer and a lot of other things. What I found far more interesting is that yesterday I found out how to do peer-reviewable and testable telepathy. I just don't know how to get this news out 😅

  34. We should model our brain to get efficiency, and Sutton's "model" to increase the scope of our processing algorithms

  35. What if 2 algorithms were used: one for the initial learning, to grasp the concepts involved in the game, and then a second one for mastering it? In the end, that's how people work; it's easy to learn to do something new, but pretty hard to master it.
    Based on the video, it looks like it's trying to master the game (the 2nd algorithm) from the very beginning.

  36. IMHO, we shouldn't limit ourselves; we should both understand how the human brain works and develop Sutton's "models",
    because the brain could bring us additional knowledge, like it did with CNNs.

  37. Should? how is that even a question? Someone is going to do it no matter what so we will find out if it is the more fruitful way for AI or not. We would have to simulate the physics of an entire biological system so it really just seems like a waste of time since we cannot even simulate a single hydrogen atom without it taking months to calculate. If you are not modelling every atom then you are not really modelling the human brain at all so then you are talking about something completely different and should not call it "modeling the human brain".

  38. As a neuroscience student I’d agree with a few other comments that those two ideas do not contradict. Implementing “episodic memory” is rather a concept of maintaining more information in the AI than modeling them like the brain – we know some but close to nothing about how the brain is wired to remember.

  39. Hey guys, I just stumbled upon this fantastic channel. I really love the concept of giving a short overview of interesting papers. Is there something similar in written form, like a blog or website where papers are presented textually (with summaries and key points)? Like a community platform, as YouTube is, just for research papers (maybe similar to ResearchGate)?

  40. we'll never have a hal (or star trek) like ai if we don't model the human brain! onward at full speed!

  41. In layman terms… how is episodic memory different from the existing experience replay / memory in reinforcement learning?

  42. It is quite clear to me that both models cannot be separated. You can combine the best of both in one.
    Efficiency is needed at the start, so take the human brain approach. But your algorithms, like a good human brain, should never stop learning or thinking/trying out something far, far out of the box. This is how you can still perfect the system with maximum efficiency.

  43. I think if we model after the human brain, it'll be optimizing for efficiency. It's structured as it is after millions of years of evolution, and arrived at this structure for maximum efficiency. Less efficient brains die out since they can't get enough food. A general AI will likely follow a human-brain-like structure.

    However, non-brain based structure could potentially optimize for performance but will never be as efficient, but it's okay if we have a high supply of power.
    And probably more likely to end up enslaving humans as batteries 🙂

  44. Why not explore both methodologies? Personally I think we should try to model biology to gain a better understanding of intelligence, then augment these biological models once we are able to achieve human-level performance on non-trivial tasks

  45. Don't worry about a good video getting less views. If it's a good paper the views will come on their own. Don't worry about the pizzazz, I'm here for the content of the papers themselves.

  46. You only need 200 petaFLOPS of computing power to simulate a brain. There is already a supercomputer in China that can deliver this kind of power.

  47. Maybe in Javeed's lifetime a door could be opened up into Zendegi-ye-Behtar; maybe his generation would be the first to live without the old kind of death. Whether or not that proved to be possible, it was a noble aspiration. But to squeeze some abridged, mutilated person through the first available aperture was not.

    If you want to make it human, make it whole.

    — Greg Egan

  48. The bitter lesson's model and the brain model are not mutually exclusive.
    In other words, trying to model things in a general way might well end up with somehow replicating brains functions.

  49. If you buy a server from one of the shops, you're paying at least 25% over retail on all parts. What is the benefit of having them put it together for you? Labor for setup is like 2 hours? Where am I wrong here?

  50. I mean, I imagine modelling the brain works for some situations and using a different system works for others. I don't think Sutton is saying that modelling the brain is automatically wrong, just that it's pointless unless it has quantifiable improvements. But why would we keep calling it "artificial intelligence" if we are not modelling a brain of some sort? "Artificial intelligence" and learning algorithms aren't the same pursuit; learning is a generalised system, while artificial intelligence implies an attempt to recreate human or animal intelligence in a computer.

  51. It's like that sand simulation video. With smart optimisations, we can probably get closer to the human learning rate and still keep the rigidity to a minimum.

  52. Model the brain to increase the efficiency of intelligence, but the brain is not as smart as deep learning engineers make it seem to be. Model the brain, but also use other techniques.

  53. Another brain-inspired learning architecture improvement in many areas is attention-based learning. I can't wait to see how well this works for music generation.

  54. ⢀⡴⠑⡄⠀⠀⠀⠀⠀⠀⠀⣀⣀⣤⣤⣤⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
    ⠸⡇⠀⠿⡀⠀⠀⠀⣀⡴⢿⣿⣿⣿⣿⣿⣿⣿⣷⣦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀
    ⠀⠀⠀⠀⠑⢄⣠⠾⠁⣀⣄⡈⠙⣿⣿⣿⣿⣿⣿⣿⣿⣆⠀⠀⠀⠀⠀⠀⠀⠀
    ⠀⠀⠀⠀⢀⡀⠁⠀⠀⠈⠙⠛⠂⠈⣿⣿⣿⣿⣿⠿⡿⢿⣆⠀⠀⠀⠀⠀⠀⠀
    ⠀⠀⠀⢀⡾⣁⣀⠀⠴⠂⠙⣗⡀⠀⢻⣿⣿⠭⢤⣴⣦⣤⣹⠀⠀⠀⢀⢴⣶⣆
    ⠀⠀⢀⣾⣿⣿⣿⣷⣮⣽⣾⣿⣥⣴⣿⣿⡿⢂⠔⢚⡿⢿⣿⣦⣴⣾⠁⠸⣼⡿
    ⠀⢀⡞⠁⠙⠻⠿⠟⠉⠀⠛⢹⣿⣿⣿⣿⣿⣌⢤⣼⣿⣾⣿⡟⠉⠀⠀⠀⠀⠀
    ⠀⣾⣷⣶⠇⠀⠀⣤⣄⣀⡀⠈⠻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀
    ⠀⠉⠈⠉⠀⠀⢦⡈⢻⣿⣿⣿⣶⣶⣶⣶⣤⣽⡹⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀
    ⠀⠀⠀⠀⠀⠀⠀⠉⠲⣽⡻⢿⣿⣿⣿⣿⣿⣿⣷⣜⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀
    ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣷⣶⣮⣭⣽⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⠀⠀⠀
    ⠀⠀⠀⠀⠀⠀⣀⣀⣈⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠇⠀⠀⠀⠀⠀⠀⠀
    ⠀⠀⠀⠀⠀⠀⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀
    ⠀⠀⠀⠀⠀⠀⠀⠹⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀
    ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠛⠻⠿⠿⠿⠿⠛⠉

  55. Brain modeling has billions of years behind it. It is ALWAYS the correct answer to model it. Sutton’s warning was about building models based on how we “think” about a problem. But mimicking the fundamental structure of the brain will always produce better results.

  56. We need to go towards human brain, because we need to be able to make AI that can make good decisions using intuition and without practice. That's because you can't simulate everything accurately, and can't have an AI controlled system wreaking havoc in real life for hundreds of years before it gets good.
    The thing with human brain is, it's filled with life experience, and because things in the world work in similar principles, you can learn and understand the principles of breakout quickly. However, an AI is equivalent to a baby, since it doesn't have any experience at all in the world, it lacks wisdom, which is knowledge of principles in the world.
    To make a general human-like AI, I propose that you should put a lot of 360 degree cameras and microphones all around in the world, to gather experience to learn how things work. That way, it could understand humans and man-made games intuitively based on all the data patterns of all aspects of life. I'm pretty sure that is required to make an intuitively thinking, quickly learning, pattern recognizing AI.
    How to practically implement this, I previously thought that you would literally need to have a robot that experiences a life progression of a human that participates in activities, but thinking more, I'm thinking that physical participation might not be required, and that it would be only required to learn to do tasks, but not necessarily to learn how the world works, to connect the dots. This way, you would only need to plant a large amount of camera-microphone units to different locations to gather data, however, you would also need to include all kind of private places and situations, otherwise the AI wouldn't learn all sides of life, and this would be difficult to implement morally.

  57. I just thought about this: if you combine two AIs with different professions, could we ourselves learn how AI should respond and adapt to random scenarios by looking at their behavior?

  58. I've believed for a long time that Kurzweil's idea of emulating a human brain is a wrong approach. However, a lot can be learned from it that can be applied to improve AI.

  59. I mean… Human brains learn glacially slow too… Have any of you deep in the AI trenches ever seen a goddamn baby? Fuckers take ages to learn literally anything.

  60. AI learning has trouble with random seeds in video games. This is because some AIs rely on repetition and brute forcing with many lives to get anything done. I've played against my share of Starcraft 2 AIs and they are very boring and repetitive, and often use exploits like mineral, armor/HP, build time, APM, and damage hacks to get ahead and look like they are doing better than they are.

  61. Why should it be called Richard Sutton's "Bitter Lesson"? Can't we just call it "sometimes it is better to do it one way, sometimes not"?

  62. The content of this video is great! Some feedback though: the backdrop was highly distracting. If you're playing something unrelated in the background, it would be better to remove all text and only show images. If you put both text and images, my brain just starts trying to understand the concepts presented, dividing my attention in two – which of course results in much less than half the attention for either of the two narratives. Just images would be better.
