Original link: https://news.ycombinator.com/item?id=38905898

What does the cerebellum do? (sarahconstantin.substack.com)
401 points by scrambled 1 day ago | 180 comments

This is fascinating to me. The list of things the author suggests the cerebellum handles is a tailor-made list of things I'm oddly bad at:

1. I'm very uncoordinated, with a noticeable intention tremor

2. I'm particularly bad at sequencing dependencies for projects/errands/household tasks. I have to write down even fairly simple sequences of subtasks or get lost in yak-shaving loops

3. When flustered, I make very distinctive disfluencies in speech around conjunctions (swapping "but"/"and"/"although") and sequencing (ie, placing objects before subjects/verbs) in sentences, as well as swapping relationships (referring to someone's parent as their child or vice versa, swapping "you" and "I/me", etc)

4. I tend to have a "ground up" approach to writing (building clauses first and then moving them around to construct sentences), which doesn't resemble the approach of other people I've shoulder-surfed.

All of these are fairly mild in terms of life impact, to be clear (or perhaps I'm just able to satisfactorily compensate for them in various ways) but I wonder if they all share some underlying minor cerebellar dysfunction.



I thought that the list of cognitive impairments sounded like a laundry list of ASD symptoms, and indeed, at least some researchers seem to believe that there's a connection between autism and cerebellum dysfunction:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3677555/



There are also a number of rare diseases caused by mutations affecting cortical neurons, and they have some overlap in their symptoms with ASD.


Often with the brain, multiple regions are needed for a single function. For example, planning action, including movement, has much to do with the prefrontal cortex, and the basal ganglia are much involved in making smooth motions and inhibiting tremors, etc. Problems with the dopamine system also affect things like planning (ADHD) and tremors (Parkinson's).


With regards to #2, I'm wondering if anyone has directed you to https://goblin.tools/ ?


I can't help thinking that this is where humanity is heading. Given that the different parts of the brain have to compete for resources, and given what the cerebellum does, it makes sense that a less developed one can be an advantage: it frees up resources for parts of the brain that are more important in our times.


Human society provides nowhere near enough pressure for evolution to have an effect (for good reasons), and humanity's timeline is nowhere near lengthy enough so far either. So I don't think we are heading anywhere in that regard.


Selective reproduction is evolutionary pressure; survival is mere prerequisite.


I agree but on the flip side it is interesting to consider what were the pressures and the timeline to bring us to where we are now.

Great ape brains are distinguished from monkey brains by their larger frontal and cerebellar lobes. The Neanderthals had bigger brains than us but smaller cerebella. And, most strikingly, modern humans have much bigger cerebella than “anatomically modern” Cro-Magnon humans of only 50,000 years ago (but relatively smaller cerebral hemispheres!)



How would you explain increasing average height in the US youth in the last 30 years? It can't be food availability, because 30 years ago people weren't hungry either


Absolutely food, or at most a hormonal change based on food, possibly with a side order of lacking a few of the childhood diseases that have been controlled with inoculations. 30 years is WAY too small a timeframe for human evolution. That works on the order of 50-100 generations, not a single generation.


Evolution doesn't happen quickly, but selection does. If something greatly changed how we select partners in the past 100 years, it could have an effect within a single generation. What could that be? Well, maybe feminism, so women don't need to rely on a man to provide income; maybe socialist policies, so nobody has to go hungry; and so on. There are plenty of reasons why some genes that were previously fine could be selected out of the gene pool.


Lack of disease, and disease treatment. Mostly vaccines. I got sick like 2-3 times grand total growing up, and only once more than mildly. My parents and their parents all got badly sick more than that growing up, and both have stories about being on death's door, e.g. my dad got German measles at 17 and was bedridden for a month; I didn't get my last growth spurt until I was 19.


I'm not sure about that. Being unable to anticipate context sounds terrible for pretty much any task. Having to think through every step of anything is terrible. Not being able to form sentences fluently and instead having to arrange them like puzzles is far too time consuming. The article literally is explaining how a large cerebellum is crucial for humans' high intelligence. Reallocating resources to other parts of the brain would make us stupider.


Having no context does allow for a fresh perspective...

Having to think things through slowly, step by step, may reveal errors others glossed over...

The cerebellum is important. But maybe, since we really know less about the brain than we think we do, differently wired does not equate to 'stupider'. It takes all sorts to make a world.

I agree with the language part, tedious... but maybe in certain situations it might be useful.



Being unable to anticipate context could reduce the prevalence of stereotyping, for instance.


What resources does it have to compete for? We basically have unlimited protein and carbs at this point.


Movement is very important to our health. People who don't move as often just die younger than those who do.

I don't know how it would impact our evolution long term.



Doesn’t matter if you live to 55 or 95 if you have some kids when you’re 25.


I don't think that's correct; raising your kids - and potentially involvement with grandkids! - impacts long term results.


I imagine it's like performing a task in software ("big" brain, cerebrum) or hardware (cerebellum). One of them is faster and more efficient but very specialized. If it breaks you're left with the task being performed slower and less efficiently on the brain that can execute arbitrary code.

But I can't imagine any change in this split in responsibilities will happen on human relevant timescales. The cerebellum is probably not evolving very fast anyway, while the cerebrum might evolve comparatively faster but it has no pressure to do it. And "faster" still means tens of thousands of years.



These seem highly related to neurodivergence, such as ASD and ADHD.


Indeed, the article reminded me of the link between executive dysfunction (ADHD) and other problems like sensory processing disorders and postural sway.

Turns out studies have confirmed the overlap in these conditions and also linked it with reduced grey matter volume in the cerebellum:

https://psychcentral.com/adhd/postural-sway-adhd#postural-sw...



What a marvellous article. One thing I’ve never quite appreciated in neuroscience is how useful physical movement is as a debugging layer. During a task, in observing gaits, tremors, speed, accuracy, etc., you’re able to gain a deeper understanding of how cognition works for non-movement tasks. I guess cognition is, after all, still just movement, but through a conceptual plane instead of a physical one.


It's been argued that most, maybe all, of human cognition is based on a sort of folk-physics mental model with objects. We throw ideas into the ring to be chewed on and distilled and maybe brought in to practice or thrown out as useless. The linguistic metaphors we use, at least, to talk about ideas and abstractions, are never more than one, maybe two, steps away from a hand moving or rearranging something.


In fact the Glasgow coma scale (1), which gives you levels of human consciousness, is precisely that.

https://www.ncbi.nlm.nih.gov/books/NBK513298/



I'm confused at what exactly the link is here but I'm curious. Could you elaborate?


I was responding to this comment:

“The linguistic metaphors we use, at least, to talk about ideas and abstractions, are never more than one, maybe two, steps away from a hand moving or rearranging something”

And the Glasgow scale is exactly that: how people physically respond to a stimulus with a body movement, and subsequent complex body movements (eg just humans vaguely deciding) are how we assess levels of consciousness.

The point is that it's not scientific at all, and is very much a "folk-physics mental model with objects", per the OP's original assertion.



> folk-physics mental model with objects

Kind of like how object permanence [0] must be learned by babies, slowly worked into their internal model, even though "things don't usually vanish when they go behind other things" seems like reliably low-hanging fruit for any process (whether evolutionary or meddling demigod) to wire up as instinctual physics knowledge.

[0] https://en.wikipedia.org/wiki/Object_permanence



Animals like newborn deer fawns are born knowing how to walk, follow their mothers around, and run away from danger, although their legs are weak at first. So this makes me wonder if having to learn object permanence is just one more example of human babies being underdeveloped compared to those of other species of animals.


Alternatively, God made human babies superior in design from birth by giving them increased adaptability. The extra flexibility might require the specific movements to be trained, and puts them behind on any specific goal at the start, but over time it makes them exceed the innate capabilities of other species. By exceed, I mean they might have a higher variety of behaviors or do them better (esp with tech).

In A.I., we see this with hard-coded assembly FSMs vs interpreters running high-level code. The former works with high efficiency out of the box but can't change behavior or improve much. The latter does nothing until it's taught the extra knowledge (interpreted code), which also might enable new behaviors (functions). Many game developers switched from hard-coded assembly to interpreted code for AI agents for that reason.

So, it’s not under-developed: it’s a better-developed component with different tradeoffs.
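
A minimal sketch of that contrast (toy Python, all state names invented): the FSM's transition table is baked in, while the interpreter's script is just data that can be taught and extended:

    # Hard-coded FSM: fully capable from "birth", but its behavior is fixed.
    HARDWIRED = {
        ("idle", "sees_threat"): "flee",
        ("flee", "threat_gone"): "idle",
    }

    def fsm_step(state, event):
        # unknown (state, event) pairs are ignored; there is no way to learn them
        return HARDWIRED.get((state, event), state)

    # Interpreter: does nothing until taught a script, but the script can grow.
    def interpreted_step(script, state, event):
        return script.get((state, event), state)

    script = {}                                  # newborn: no behaviors yet
    script[("idle", "sees_threat")] = "flee"     # taught by experience...
    script[("flee", "threat_gone")] = "hide"     # ...and free to diverge/improve
    print(interpreted_step(script, "idle", "sees_threat"))  # flee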



> God made human babies superior in design from birth by giving them increased adaptability.

Holy assumptions batman. Homo sapiens aren't "superior", that's a terrible starting place for a hypothesis. There are plenty of metrics a human will never beat other species. Reaction time is a great starting place.



To me it makes very little sense for object permanence to be learned rather than innate - have a look at the "contradicting evidence" in the article you linked


See the book Metaphors We Live By for more on this. (It's a little heady and the authors lost me with some of their deeper claims in the second half of the book, but the first few chapters were fascinating)


The first video linked in the OP is astonishing to me, a layperson with no medical training.

It's giving me lots to think about with regard to motor impairment. (Basically, I'm reviewing my prejudices and nodding, due to better understanding the plight of afflicted individuals.) [0]

[0] https://www.youtube.com/watch?v=FFki8FtaByw



> One thing I’ve never quite appreciated in neuroscience is how useful physical movement is as a debugging layer.

My sister, who is a choreographer, had some interesting views on how movement could be used as therapy. (Specifically, crawling, as a base-level movement.) I thought that was woo-woo, but later I read that there was some medical support for this.



"The cerebellum may also inspire artificial-intelligence approaches somewhat, especially approaches to robotics or other control, in that it may be be beneficial to include a fast feedforward-only predictive modeling step to control real-time actions..."

This is pretty widespread in controls, actually. The dominant control technique for legged robotics is model-predictive control ("MPC") which explicitly uses such a predictive model to determine the best inputs to the actuators.
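
For the curious, a minimal random-shooting MPC sketch (a toy 1D point mass, not any particular robotics stack): roll candidate action sequences through the predictive model, keep the first action of the best sequence, and re-plan every tick:

    import numpy as np

    def step(x, v, u, dt=0.05):
        # predictive model: a 1D point mass we can push left or right
        return x + v * dt, v + u * dt

    def mpc_action(x, v, target, horizon=20, samples=256):
        best_u, best_cost = 0.0, float("inf")
        for seq in np.random.uniform(-1, 1, (samples, horizon)):
            xs, vs, cost = x, v, 0.0
            for u in seq:                 # roll the model forward
                xs, vs = step(xs, vs, u)
                cost += (xs - target) ** 2 + 0.01 * u ** 2
            if cost < best_cost:
                best_u, best_cost = seq[0], cost
        return best_u                     # execute only the first action

    x, v = 0.0, 0.0
    for _ in range(100):                  # re-plan at every control tick
        x, v = step(x, v, mpc_action(x, v, target=1.0))
    print(round(x, 2))                    # should have settled near 1.0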



Predictive models are also behind many of the SOTA results in modern reinforcement learning, although they are often used to generate fictive data from which a policy is learnt.


There are people without a cerebellum. It affects thought and emotion.

https://www.npr.org/sections/health-shots/2015/03/16/3927897...



More famously, Joey Ramone didn't have one:

https://www.youtube.com/watch?v=rjWZJQyykeM



One of the greatest rhymes in rock history!

   Now I guess I'll have to tell 'em
   That I got no cerebellum


But even more strangely, he did have this:

> He was born with a parasitic twin growing out of his back, which was incompletely formed and surgically removed



Can you provide a reference for this? I searched and found nothing.


I think it was a joke - The Ramones have a song with lyrics about a missing cerebellum, but it's a song about a fictional character with a lobotomy.


“It’s a big job, but it’s not easy.” - Is this a problem? The way I understand this sentence is that: "It is a big job, and you might be excited about it, but it is not easy".

When talking to someone with whom we have a good rapport and good context, this conversation can be done faster. Basically, I/O is slower than the CPU.

Some of the other sentences can also be explained in similar manner.



> The cerebellum has a repeated, almost crystal-like neural structure:

As a software engineer who did neurosurgery residency, my intuition/guess is that the cerebellum is kind of like the FPGA of the brain.

The cerebrum is great for doing very complicated novel tasks, but it takes time and energy. The cerebellum, on the other hand, is specialized in encoding common tasks so it can do them quickly and efficiently. A lot of our motor learning is in fact wiring the cerebellum correctly.

This can actually lead to an interesting amnesia, where a person can learn a skill (cerebellum) but not remember learning the skill (cerebrum). So you could end up with a person who would think that he had never seen a basketball hoop or basketball before but could be doing layups, dunks, and 3 pointers with ease.



> So you could end up with a person who would think that he had never seen a basketball hoop or basketball before but could be doing layups, dunks, and 3 pointers with ease.

I know a lot of people with the opposite problem.



Thanks for that. Don’t often laugh at a comment


What a great comment, seriously.

It just made me start thinking, and then I realized perhaps another analogy is a just-in-time compiler: code, or skills, used often enough get compiled by your body into native neurological code and stored appropriately.



It is always funny to see brain metaphors morph to resemble our current stage of technological development, as the years go by. First it was anima, or hydraulic analogies of spirits and fluid moving through the body. Then it was clocks, the mechanistic processes of the brain. And so on and so on until today we metaphorize the brain to be like computer hardware. In vogue as well is comparing it to neural networks, due to the influence of machine learning and AI today. I wonder what metaphors we will come up with next.

http://mechanism.ucsd.edu/teaching/w12/philneuro/metaphorsan...



I always liken reality to the fractal boundary of the Mandelbrot set, and our attempts to understand it through language and metaphor as ways to approximate and fit that boundary. Consider the successive colored stripes to be like updated, more accurate metaphors in the following video:

https://youtu.be/u_P83LcI8Oc?si=ObkNyUfCCSUCb0Vt



The next metaphor would be quantum computers.

There are a few who have started suggesting that quantum mechanics plays a large role in cognition, but very few take them seriously (obviously it has an effect, but likely much can be understood more classically, etc).

The fact that few are moving toward that style of thinking seems to give a bit more credibility to NNs being closer to the correct model. If spiking NNs take off more, we'll probably see more arguments around that, and if Blue Brain's full in-silico modeling takes off we may see the succinct description given by those studies used to describe ideas. However, to first approximation, NNs and spiking NNs aren't really a bad way to reason about large descriptions of brain dynamics, in many circumstances.



There's zero evidence that there's anything more quantum mechanical about the brain than a brick. I.e., there are physical and chemical interactions that emerge from quantum behavior, but they can be modeled just fine without QM.

Instead people seem to just equate two different complex things they don’t understand with each other.



Your comment has the feel of a rebuttal, but I hope it's clear that the original comment takes effectively the same stance.


I didn't disagree with what you said, but I think some people may have misinterpreted it.


Though it's not like we're flitting from one bad analogy to another. Hydraulics are a great metaphor for understanding how computers work, for example.


Some 50 year old cartoons about said topic... https://github.com/larsbrinkhoff/crunchly


I always thought neural networks were an example of the analogy working the other direction. Instead of modeling our brain on the technology of the time, we chose to model the next technology on how we think our brains work?


I wonder if each iteration gets closer as we go


Heh, not sure if you're aware, but our brain seems to have special treatment or logic for contextualizing "high technology", as indicated by one well-documented failure mode: the "influencing machine" is a feature of schizophrenia involving the delusion that contemporary high technology (magnets, pneumatics, gears, mind-control drugs, satellites, prob AI now, etc) is being used by mysterious attackers to control the sufferer's body and mind: https://en.m.wikipedia.org/wiki/On_the_Origin_of_the_%22Infl...

Though not mentioned in the post, schizophrenia (oddly enough) is also tied to cerebellar dysfunction: https://neuro.psychiatryonline.org/doi/10.1176/jnp.12.2.193#...



But air looms are real: https://www.theairloom.org/


In all honesty, I believe the reverse is true. Our technology seems modeled after humans and the environment we inhabit. Airplanes being glorified birds, wheels being glorified feet, computers being glorified brains or neural networks...well.


It's hard to imagine two objects in the vehicle-ground-interface conceptual space much farther away from one another than feet and wheels.


Try to spend more time imagining and entertaining that thought. They both have the same function but execute it differently.


Both are spot-on examples of what the cerebellum does. If I may, a third example/analogy that comes to mind is cache memory or L2ARC drives; at least that's how I have it stored in my mind (pun intended) :-)


"The brain is like a computer that"-style analogies are rarely fitting, or so vague as to being almost useless. My fridge is an L2 cache for food I want to eat soon.


What's wrong with the fridge analogy?

It's an analogy for a reason. It bothers me when people combat analogies so incredibly hard. Of course, it is not really fitting or the same thing - it's an analogy.



It would be a useful analogy for someone intricately familiar with computers but who was only sort of vaguely familiar with the concept of eating, has thought about houses only on occasion, and knows about refrigerators only insofar as they’re a food-related thing inside a house.


> My fridge is an L2 cache for food I want to eat soon.

A less-volatile form of L2 cache.

Food pocketed in my cheeks are the CPU registers?



> My fridge is an L2 cache for food I want to eat soon.

I would think fridge is RAM, L2 is table, L1 is plate. (I am deliberately ignoring the pun potential for Cold Storage.)

But other than bickering about the exact mapping, I don't see the problem with that analogy?



Cold storage is my freezer. There's often a delay between when I need it and when it is ready.


AC outlet is the clock speed!


Some fridges are like magnetic tapes, with files from 1978.


Yeah, the discussion of classical conditioning led me to the same sort of conclusion. The fact that the cerebellum has been growing faster in human-like primates as a percentage of our already larger brains... well, I can't help but think that all our social reactions, drives, and complex needs are essentially some kind of co-option of this FPGA for optimization purposes. Like the cerebrum does training and the cerebellum et al. do evaluation.


This comment is the best example of why I come to read comments in HN.


I am way more interested in how you gave up $800,000+ a year to do software engineering.


Neurosurgery residency is very, very, very intense. Unfortunately not everyone finishes. When I was in medical school, I remember some general surgery residents quitting after falling asleep in the middle of an operation; another neurosurgery resident I rotated with was pretty miserable, and I found out later he quit. I would have liked to be a neurosurgeon, but simply didn't have the physical stamina.

I ended up becoming a radiologist. I never heard of a radiology resident quitting, although I have seen a few residents get kicked out for mental issues or gross incompetence.



> gross incompetence

How on Earth do you get to "resident" and have "gross incompetence"?

There are sooo many gates before getting to be a "resident" that this completely baffles me.



Undergrad/premed: live with family, have 100% 24/7 familial support of your education, living at home with all of your essential basic living needs taken care of.

Residency: move to a different location away from family, no longer living in a dormitory environment with the expectations associated with being a student but are now a real adult making your way in the world. Suddenly you have to make the whole package work on your own without laundry/cooking/mental health/financial support.

Now you can no longer put 100% of yourself into your studies, but instead can only manage the 60 or 70% that most people can muster when they have to actually maintain their physical existence while also meeting their professional expectations.



It happens. Often incompetence is specific to one specialty - neurosurgery is competitive, so you can assume that anyone who gets it has at least adequate grades/test scores. But that doesn't mean that they're clinically worth a damn.

I'm an anesthesiologist. There are people who wash out because they just don't have the temperament for it. They're not dumb, they're not even bad doctors, they just aren't mentally equipped to sit back and relax while running a code.



Obviously not OP and not in this position, but I have worked with people who left surgical training positions and their reasons were health and a realisation that they would miss every family milestone, never get a real break and have every part of their life revolve around their job with the money and god-like power not compensating for that.

Obviously that’s one side of the equation, I don’t have any surgeon friends I know well enough to give the opposing view.



OP said "did" implying finished. It's a six year residency minimum, though the first year is general surgery. It's not often people do the whole damn thing, then decide to bail. Usually it's after 2 or 3 years

Though some people are less burdened by golden handcuffs and sunk-cost fallacy

Plus, I've never met a happy (or sane) neurosurgeon



I'm in no way even close to being in the medical field, but I could see it as an irreconcilable dichotomy between the Hippocratic oath and the fact that you're going to be causing damage no matter what.

Sure, the hemorrhage needs to be fixed, so you're preventing further damage, but every cut may cause unknown ramifications. Anyway, I'm postulating a neurosurgeon would be aware of this, and have to carry that around with them.



Medicine as a career offers immense personal fulfillment, variety, human interaction, and prestige at the expense of dealing with difficult outcomes and ranges of personal sacrifice -- neurosurgery as a specialty just takes all of these to their extremes.

I value the former and find ways to discount the latter. So I am very happy. Though sane or not would be up to others.



Some can do it. Are they really different than the rest?? Perhaps higher tolerance to stress or even thriving in it?


$800,000 is a lot of money, even after taxes you'd only have to work a few years at that salary before you could live reasonably comfortably for the rest of your life without working at all.

Seems perfectly reasonable to switch to a lower stress career at some point.



The personalities attracted to the role aren’t really amenable to thinking that way.


There's also lifestyle inflation, and of course retiring early rarely impresses your spouse.


People do change, especially when exposed for a long time to a stressful environment. Ask any post-burnout fellow. Fortunately most re-evaluate their lives before burning out.


Assuming they only got as far as their residency (and didn't end up as attending physician), it's possible that they didn't see themselves spending a full 7 years as a resident doctor (making under $100k/year working 80+ hour weeks) only to spend the rest of their lives doing more of the same except with a much higher salary. If they already graduated their residency then the reasoning is the same, except it'd be a much harder decision because of the sunk cost.


I thought the cerebellum predated the rest?

Out of curiosity, having weird symptoms not far from the weird amnesia you describe, do you know any books / resources for understanding advanced brain neurology like this?

Thanks in advance





Thanks a lot


> A lot of our motor learning is in fact wiring the cerebellum correctly.

That got me interested: since the wiring is so long (from limbs to cerebellum), what kinds of motor learning?

Do we know if the cerebellum needs more energy than the rest of the brain?



What do you mean? The cerebellum is closer to the spinal cord than the rest of the brain. And there's no learning happening anywhere but the brain; vertebrates don't have a distributed central nervous system like octopuses do. The only thing vertebrate limbs can do on their own is certain hardcoded reflex actions.


So the spinal cord is only a wire, and, besides reflexes, there is no processing done anywhere but the brain?

How about neurons in the gut? I've read somewhere that the intestines are lined with some neural "mesh". Is that only for signal transmission?



> That got me interested: since the wiring is so long (from limbs to cerebellum), what kinds of motor learning?

I'd imagine it's things like training a dominant hand. The skills required for precise motor control, to produce the right movements for e.g. handwriting. Since the wiring is so long, and feedback is delayed, you need to be able to precalculate these movements.

Also imagine how e.g. an intent to move somewhere actually gets implemented. You don't always have to think about each individual step of walking, or pay explicit attention to things like your sense of balance. You probably don't even have to choose that you're going to walk, or think about how to get up. When you want to go somewhere, you just do it, and somehow it's all calculated for you and happens.

When you try to move a specific limb, how do you know which muscles correspond to that limb? In fact, how many of those muscles can you even individually address? You can learn to individually address them, but I bet you don't come with that ability by default.

Then of course there's the question of what even causes your limbs to move once you will them to move.
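
A toy illustration of that precalculation idea (my sketch, nothing anatomical): "practice" fits a forward model of the body, after which the "skilled" controller issues commands without waiting on the slow feedback path:

    import numpy as np

    rng = np.random.default_rng(0)
    true_gain = 2.7                   # unknown limb/muscle gain ("the plant")

    # Practice: issue motor commands, observe the (delayed) sensory feedback.
    commands = rng.uniform(-1, 1, 200)
    observed = true_gain * commands + rng.normal(0, 0.05, 200)

    # Fit a forward model from the practice data (least squares).
    k_hat = (commands @ observed) / (commands @ commands)

    # Skilled control: precalculate the command, no feedback loop needed.
    def feedforward(desired):
        return desired / k_hat

    print(round(true_gain * feedforward(0.5), 3))   # lands close to 0.5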



This is great and I agree with the reprogrammable part.

However, think of it more like a switch or a router between the PNS/CNS (minus vision) and the "higher brain" longer-term planning systems.



This actually happened to me when I lost most of my memory in an accident. I couldn't even remember my job tasks enough to describe how to do them. They were repetitive, though, with it all in my intuition or muscle memory. I could do the stuff without knowing why I was doing it. My friends and I joked I was the Jason Bourne of the place. It's a strange feeling, too, because I could feel like something was missing as I acted.

I was also a supervisor in a competitive company with people gunning for my job. The Recession was not a good time to be disabled. I feared demotion or termination. I hid my injury while trying to get back in mental shape. They chalked up occasional forgetfulness to the stress we were all under and my constant partying. (That was before I was in a relationship with Jesus Christ and gave up those sins.)

Eventually, one of my recovery strategies was to take note of the specific things I did on instinct, think about why I did them, and re-create the mental models. I’d also just ask people how they did things and what they learned works best. Many were people I trained with my prior techniques who re-taught them to me. By practicing those, I both re-learned my mental model of how to do the job and connected it with the instinctual wiring.

There were one or two other tricks that helped. That was the part relevant to your comment, though. I never quite got back to my old level of performance. Capitalizing on how intuitive memory is different than conscious memory has helped me in many places ever since. I just keep breaking things into simple pieces that I repeat over and over. Then I keep applying those pieces in new ways to keep my brain fresh.



Its striated structure matches sequential timed operations.


What signals cause rewiring of the cerebellum?

Is there any way to induce that state exogenously?

Some thoughts include dopamine / pain receptor reinforced learning. Maybe there's a faster way?



Psilocybin and other psychedelics may be that exogenous agent, they release Brain-derived neurotrophic factor (BDNF) [0], which plausibly could cause “rewiring of the cerebellum” [1][2], and may even do this with sub-perceptible (micro)doses [3].

[0] https://pubmed.ncbi.nlm.nih.gov/37280397/

[1] https://www.nature.com/articles/s41386-022-01389-z

[2] https://www.frontiersin.org/articles/10.3389/fpsyt.2021.7246...

[3] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8033605/



> What signals cause rewiring of the cerebellum?

Any signal, that's the point. The cerebellum learns the patterns of signals involved in motor control.

This is why you train your skills by doing the correct movement over and over again. Once the cerebellum has adjusted to the correct motor signal patterns the correct movement will become effortless.



> This is why you train your skills by doing the correct movement over and over again.

Yes, but can you make this go faster?



You'd have to apply an adverse stimulus within a ~5ms threshold to actions that were 'wrong'. It would depend on the exact task you're trying to do, though. That would then cause other areas to potentiate that specific movement/firing as incorrect.

It's an active area of research in sports and the DoD, as you'd theoretically be able to train marksmen and athletes at a much faster and better rate. However, even really, really fast computers aren't quite fast enough to apply the adverse stimulus to 'wrong' movements/firing.

Also, your computer had better be really accurate and never mess up, or that person is going to have a hell of a time retraining their brain. And their brain may decide that the clouds/temperature/itchy grass/breakfast are the reasons for the adverse stimulus, as this is all happening in a subconscious time frame. So, good luck there.



This is great info, thanks!


Pain or satisfaction, probably.

When a motion "feels good", or is painful, that probably means you learn. So chase those.



The pattern of muscle-activation timing for the correct movement form needs to be figured out by exploring the space.


We don't typically efficiently explore the space. This is why coaches exist.

The feedback loops are often long. Getting a review on a performance, etc.

If a device were set up to trigger pain within some milliseconds of an incorrect activation, surely we could speed this up?



Train harder. Develop habits.


I think we're in the "punch card" phase of biology. I'd be willing to bet (timeline uncertain) that there will be shortcuts to this process.

For now, opportunity cost rules the day. I'm 120% maxed out.



I’m going to say it’s all about the glutamate.

https://www.nature.com/articles/s41467-023-38475-9



mk-ultra project monarch


Do you think this could be responsible for some part of "muscle memory"? Sometimes when I switch to other identities (DID), they can forget steps. That presumably happens because those steps are automatic for me, so I don't have to think about them, but when others try to do the same thing (not think about them), the automatic thing doesn't happen, and they end up missing the step entirely. They have to remind themselves to think consciously even about things that are normally automatic for me, because they don't have the same muscle memory.

I also wonder if neurodivergency affects this region. I'm autistic, so my brain is detail-oriented. Sometimes it feels like I can perceive "neural circuits" that are implemented by the so-called FPGA. When I have a compulsive behavior or trigger, I can sometimes observe the entire execution flow, not just the result. I think that's neat.



[flagged]



What makes you say that? Please have something more creative than "dissociative identities can't be aware of each other".


HN may enjoy this old Christmas special I wrote on Ethan, a teenager born without a cerebellum https://www.economist.com/christmas-specials/2018/12/18/the-...


Piercing the Economist's veil of anonymity eh?


My overall read on this article is that its claims are probably overconfident. Like, it seems interesting, but like she's seeing a few results and making big claims about how the cerebellum plays into overall cognition, and my general sense is that lots of humility is usually warranted here: that simple and decisive statements usually turn out to be riddled with provisos and unexplained behavior.


After sharing this piece with a neuroscientist coworker, I got the same feedback. Interesting article, but it should probably be taken with a grain of salt where the author extrapolates from studies.


You are correct. We don't really know what's going on. Every claim can be met with an equally emphatic opposite claim with equally compelling evidence by someone cherry picking the "correct" studies and listening to the "correct" people. What we call neuroscience is still in a pre-Newtonian era.


Did you also read the comment by Steve Byrnes with his own theory on what the cerebellum does, and the author's reply of "seems right"?!


I missed this, but after reading Steve’s comment, I don’t see much in his “little time machine” theory of function that conflicts with the original article’s ideas, except on the classical conditioning point.


That's an interesting perspective. While I agree that the pace of advancements in neuroscience is slower compared to AI, I think it's important to note that understanding the brain is a fundamentally different problem than building intelligent machines. The human brain is an incredibly complex system with billions of interconnected neurons, and we still have a long way to go in terms of fully understanding how it works.

AI, on the other hand, is designed to solve specific problems efficiently, and it can be engineered to mimic certain aspects of human cognition without necessarily needing to understand the underlying mechanisms.

While it's possible that AI could eventually help us better understand the brain, I believe that advancements in neuroscience will continue to be crucial for unlocking the full potential of AI. Understanding how the brain processes information, learns, and makes decisions could lead to the development of more sophisticated and human-like AI systems.



Hi ChatGPT


Hi ChatGPT!


I like the beginning of the article quite a lot for giving an overview of the cerebellum and teaching that it is the home of unconscious learning, but to me it moves into weak speculation quite quickly. First, 'Purkinje cells learn individually, and no other cells are known to do that' leads to 'neuron connectivity is not enough to simulate brain activity', even while knowing that higher-level mental activity exists without a cerebellum. Then there's the claim that the cerebellum might be the home of measurement just because a Purkinje cell can time a reaction, and then (judging by the headlines; I lost interest in attentive reading) the speculation that it is the place for anticipation and sensing. I get the feeling of wanting to expand the cerebellum onto as much as fantasy stretches. The whole cerebellum topic sounds fascinating enough without 'rethinking intelligence' completely.


I find this particularly interesting:

> The brain is not like a neural network where the only thing that is “learned” or “updated” is the weights between neurons. At least some learning evidently happens within individual neurons.

> That’s bad news for anyone hoping to simulate a brain digitally. It means there’s a lot more relevant stuff to simulate (like the learning that goes on within cells) than the connectionist paradigm of treating each biological neuron like a neural-net “neuron” would imply, and thus the computational requirements of simulating a brain are higher — maybe vastly higher — than connectionists hope.

I had heard this as well close to ten years ago on some NPR radio show: That researchers had reasons to suspect that a whole lot more processing happens within the synapses themselves.
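
As a toy illustration of why within-cell learning inflates simulation cost (a sketch, nothing biophysical): if each unit carries internal adaptive state on top of its weights, a simulator has to track and update both:

    import numpy as np

    class Unit:
        """A 'neuron' with connection weights AND internal adaptive state."""
        def __init__(self, n_inputs, rng):
            self.w = rng.normal(0, 0.1, n_inputs)   # the connectionist part
            self.threshold = 0.0                    # the within-cell part

        def fire(self, x):
            spike = float(self.w @ x > self.threshold)
            # local, within-cell adaptation: firing raises the cell's own bar
            self.threshold += 0.1 * spike - 0.01
            return spike

    rng = np.random.default_rng(0)
    layer = [Unit(8, rng) for _ in range(4)]
    x = rng.normal(size=8)
    print([u.fire(x) for u in layer])   # the weight matrix alone no longer
                                        # captures everything being simulated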



If you're curious about the pronunciation of Purkinje like me: In Czech, [purkɪɲɛ] (spelled Purkyně); in English, per-kin-jee [/pɝkɪnd͡ʒiː/].

https://www.youtube.com/watch?v=23MFfOsTDIs



Perkingee, when you read one orthography with the rules of another orthography. Curiously, also happens to "Czech" or "Czechia" itself which I've been shocked to learn some pronounce chechia instead of checkia, explaining the baffling confusion with Chechnya.


Why must mental functions be localized to physical components?


They probably aren't, at least at the level we often teach. Much of our knowledge about the brain comes from observing people who have pieces missing and seeing how their behaviour differs from a normal adult or putting people in an fMRI scanner and saying "wow, that area used a lot of oxygen compared to baseline". This, and a scientist's nature to classify things, led to a lot of overoptimistic categorization of brain function to specific regions. As neuroscience has matured the field has grown to recognize a more nuanced view that most computation in the brain is more distributed than we first assumed, and different areas are often involved in overlapping functions. It can also change over time or after extreme brain trauma. But it's not correct to say it's fully distributed either. The honest answer is we still have an extremely poor understanding of how the brain works.


My guess: because it's useful, per the following analogy.

We use text embeddings to represent the concepts behind written words. They lack nuance when the same word has different meanings in different contexts. LLMs use text embeddings and enrich them with the attention mechanism.

For words that really do represent a single concept, a text embedding works perfectly on its own.

For concepts that are context dependent, we use the attention mechanism to gently guide the text embedding closer to the intended meaning, as identified by the surrounding words. That's the role of the Value vector of the K, Q, V triplet in the attention mechanism, to be precise.

So this is a simplistic approach, corresponding to a "first approximation", which could be good enough for some cases. We don't know which cases exactly yet, but we'll know once enough evidence is given to the contrary.

It's not a good model, but a very good approach in order to do research in a stepwise manner. With time, it'll get more and more nuanced, one approximation at a time.
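
A minimal numpy sketch of that "gently guide the embedding" step (single head, learned Q/K/V projections stripped out, purely illustrative):

    import numpy as np

    def contextualize(E):
        # E: one embedding row per token; Q, K and V all taken as E itself
        scores = E @ E.T / np.sqrt(E.shape[1])
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)   # softmax over the context
        return attn @ E                           # each row is nudged toward
                                                  # the rows it attends to

    tokens = np.array([[1.0, 0.0],    # ambiguous word
                       [0.9, 0.1],    # context word with a nearby meaning
                       [0.0, 1.0]])   # unrelated word
    print(contextualize(tokens).round(2))   # row 0 is pulled toward its context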



Are you suggesting a metaphysical structure? I don't understand what you are getting at.


Even if you are a full materialist it's fallacious to assume there is one "part" that does something, like it's a factory assembly. Instead it might be a function of the composition of brain parts.

It's like asking which part of a bat makes it fly. The wings? Well kind of, but you need more than that. I guess you can fly without feet... it's just not a well formed question.



If you ask the question a bit differently, then it's not a mystery at all: why do brain parts whose neural structure is conducive to fine and agile motor control perform motor control?


Why is it in a part? Which part of the violin makes it sound in tune?


> Even if you are a full materialist it's fallacious to assume there is one "part" that does something, like it's a factory assembly. Instead it might be a function of the composition of brain parts.

It's true that a particular function may not be localizable more specifically than the brain (or even the whole body), because defining it as a distinct function may not reflect the organization of components within the body. But it's still performed by a defined physical system, and there are still sub-functions necessary to perform that function that are localizable to narrower components.

> It's like asking which part of a bat makes it fly.

It's like that in that we absolutely can describe specific parts of the bat and what each contributes to flight.



Because physical components are all that exists.


What other sort are there? Imaginary components? Metaphorical components?


> While humans don’t have these kinds of sensory systems

(He's talking about sensing the 3D environment using electric fields)

I wonder whether binaural hearing is such a sensory system. You can blindfold someone, then lead them into a space. They can tell whether they're outdoors, or in a small, bare room, or a concert hall, or a room with furniture and drapes. Perhaps they can tell whether they're near or far from a wall, and in which direction.



A sighted person who is not blindfolded can do the same thing, relying on reconstructing their 3D environment from the way electromagnetic radiation affects the rhodopsin in their retinas rather than the movement of hairs in their cochlea due to air pressure changes over time, and integrating the differences between a spatially separate pair of detectors.


Possibly of interest: cerebellum involved in asd https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8998980/


Timing is also needed in speech, which is quite fast compared to conscious response; so the problem with conjunctions might just be that the cerebellum anticipates them.

Note that time perception is distorted so that we don't notice how slow conscious response is.

So the feeling of "competence" is when your cerebellum is anticipating correctly.



> In total, the cerebellum contains 80% of all neurons!

Apples and oranges, but that's so reminiscent of MLPs in Transformers. A similarly large fraction of the weights in transformers come from the MLPs in each layer.
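
Back-of-the-envelope for a standard GPT-style block (my arithmetic, ignoring embeddings, biases, and norms): attention carries four d×d projections while the MLP carries two d×4d ones, so the MLP holds roughly two-thirds of each block's weights:

    d = 768                        # model width, e.g. GPT-2 small
    attn = 4 * d * d               # Wq, Wk, Wv and the output projection
    mlp = 2 * d * (4 * d)          # up-projection and down-projection
    print(mlp / (attn + mlp))      # 0.666... of each block's weights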



Well, the MLPs are the actual neural network (the approximator), whereas the attention (the rest) is more of a text-to-relevant-embedding extractor.


So classical conditioning in humans requires special cells in the Cerebellum (Purkinje cells), which can even do single-cell learning. Which artificial neurons can't do, as only weights (artificial synapses) are updated. So how is classical conditioning actually implemented in artificial neural networks? I assume there is some minimum network which makes it work.


I always assumed an ANN is simply a universal, learnable function approximator. That is, there is no direct equivalent of classical conditioning; only (data in, expected output) pairs.


There must be a minimal ANN architecture which implements classical conditioning. This architecture could be quite limited in what it can learn compared to ANNs in general. Similar to how feed-forward networks are limited compared to RNNs.


You can train single layer neural nets. Not very useful, but they do exist.

There are certain ANN architectures that relied on, essentially, classical conditioning based on Hebbian learning rules and variants thereof. Kohonen self-organizing maps are an example of that.

Not that such historical systems are popular today, though.
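
For flavor, the kind of rule such systems used; this is Oja's stabilized variant of plain Hebbian "fire together, wire together" learning:

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(0, 0.1, 2)
    for _ in range(2000):
        x = rng.multivariate_normal([0, 0], [[3, 1], [1, 1]])  # correlated inputs
        y = w @ x                            # the unit's response
        w += 0.01 * y * (x - y * w)          # Oja's rule: Hebbian term + decay
    print(w.round(2))    # heads toward the inputs' first principal component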



> special cells in the Cerebellum (Purkinje cells), which can even do single-cell learning.

As a neuroscience novice, I've always assumed that something about the gross model of the neuron, as far as I understand it, cannot be correct or is incomplete. Because I never understood why single cells aren't already performing single cell learning, given that there are always far more dendrites than axons.

Since this characteristic turns each neuron into a lossy compression function, there has to be some process by which certain dendrites are considered 'more important' carriers of information than others, in order to make a tie-breaking decision about what to include in the compressed signal and what to throw out, as the cell decides whether or not to transmit an impulse (including whether or not to override prior inhibitory signals) back up the axon.



Well, not all incoming signals get the same weight for the outgoing signal, as the dendrites are e.g. more or less close to the part of the cell where the spikes are generated. But this computation is just analogous to the connection weights together with the activation function in artificial neural networks. That's not what enables classical conditioning in single cerebellum cells.


At a rough guess, each Purkinje cell is an MLP unto itself, and as the article states, this implies some orders of magnitude more computation for a brain simulation. I also heard something like 'a neuron is an MLP unto itself' on the Brain Inspired podcast. It's likely we've vastly underestimated the processing power of the brain.


Well classical conditioning kind of only makes sense in the context of an agent that is receiving inputs and taking actions on them. Many neural networks don't solve problems of that type, and so have no need for classical conditioning.

But when you do have such a problem, conditioning is not very complicated. The normal algorithms and neural structures are designed to learn stuff like "when a given input happens a certain action must be taken", and that's all you really need for conditioning. How does it actually do it? Well, I guess with gradient descent it would work something like this: every time there is a puff of air the network will be like "damn, I should have blinked to avoid this", and so it makes its current internal state a little more likely to lead to blinking. Gradually, as it happens more times, it will learn a strong association for the ringing bell or whatever.

A small RNN could learn this.



Yeah. It's just not quite clear what the minimal example for such a network would be. I assume you have N inputs and one output. The output is always active when input 1 is active; otherwise the output is inactive. So the other inputs are ignored. However, when one of those other inputs, x, tends to be temporally correlated with input 1, after a while x will generate an output upon activation even if input 1 isn't active. If x becomes decorrelated with input 1, x will again be ignored. Not sure what the simplest network architecture looks like that implements this behavior.
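
For what it's worth, a single linear unit under the Rescorla-Wagner/delta rule already shows roughly this behavior; a sketch, with the threshold and learning rate chosen arbitrarily:

    w = 0.0       # associative strength of "input x" (the conditioned stimulus)
    lr = 0.2

    def trial(cs, us):
        """One trial; returns whether the output fires."""
        global w
        v = w * cs                    # prediction of input 1 from input x
        w += lr * cs * (us - v)       # delta rule: surprise drives learning
        return bool(us) or v > 0.5    # input 1 drives the output directly

    for _ in range(30): trial(cs=1, us=1)   # x temporally correlated with input 1
    print(trial(cs=1, us=0))                # True: x alone now drives the output
    for _ in range(30): trial(cs=1, us=0)   # decorrelated again (extinction)
    print(trial(cs=1, us=0))                # False: x is ignored once more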


> The output is always active when input 1 is active.

Neural networks don't have instinctive behavior like that.



Things can be hardwired


This article made me wonder if dyspraxia is related to impaired or inhibited cerebellum function. A cursory search yields at least one article that supports the idea:

> Results revealed that children with DCD had reduced grey matter volume in several regions, namely: the brainstem, right/left crus I, right crus II, left VI, right VIIb, and right VIIIa lobules



Is the cerebellum responsible for muscle memory?


Not really, but it can be involved for some tasks. 'Muscle memory' is a bit of a complex thing. It's not so much the firing of the neurons as it is the timing of that firing. Your reaction time is at the ~5ms level, much longer than the muscles need to move in concert to, say, hit a 3-pointer. Controlling all of that can take place all the way from the brain down to the ganglia of the spinal cord. Drinking a cup of tea while reading will mostly take place before the brain gets a chance to intervene, for example, while riding a bike will involve more of the brain. I want to stress that it's a complex and not well studied area of active research.


It is interesting though. Do you know if there is a name for this particular area of neuroscience?


There is not, it would just be general neuroscience. I'm unaware of specific labs either. Google would be your best friend in terms of trying to find specific researchers and in reaching out to them.


I’ve many times had the experience of trying to debug someone’s computer problem, and trying to describe how to fix something, I couldn’t think of what to do in words. So I said, “my hands know where the answer is” and once I had the mouse I clicked around and did the task fairly quickly. I wonder if that was the cerebellum solving the problem for me?


Amazing article, and for a layperson who’s been reading a lot of neuroscience, the perfect level of complexity. I love an article that makes you (internally) shout “why haven’t I wondered about that before?” over and over, so thanks for that. A compliment as to clarity of purpose, I suppose.

Materially;

  That’s bad news for anyone hoping to simulate a brain digitally. It means there’s a lot more relevant stuff to simulate (like the learning that goes on within cells) than the connectionist paradigm of treating each biological neuron like a neural-net “neuron” would imply, and thus the computational requirements of simulating a brain are higher — maybe vastly higher — than connectionists hope.
I get where she’s coming from, and she’s not wrong, but it seems like an unnecessary detour to dunk on another AI “camp” in the field for drama points and satisfaction - the Marcus Maneuver, if you will. Connectionism isn’t a cult or an institution; it’s a paradigm that emphasizes the utility of big nets of interconnected smaller pieces. Any self-avowed connectionist (are there any left? Honest question) could just retreat to “ok well it’s networks of brain cells plus smaller intracellular networks” and keep their paradigm. And all we can say to that is “ugh, I guess”, as far as I can tell!


At least one self-avowed professional connectionist here. I was coming to make a similar critique.

Connectionism isn't and never was about trying to simulate the biological neural networks and other anatomy. As you said, it's about emphasizing the network and network emergent phenomena over isolated pieces (e.g. "grandmother neurons" or strict localization of brain function). At the information processing level the contrast is to classicism/symbolicism that tries to explain cognition as atomic and modular operations on symbols.



Good to hear from one, thanks! My big insight when I first started really getting into ai was “we should unite the two camps now!”, only to find out we’ve basically been doing that since the 90s. So I'm glad the debate survives a bit to this day!

Honestly, after 2023 I think we’re all connectionist in a way lol, except for the old guard, of which Chomsky might be the only (adjacent) one left. Godspeed to Chomsky, I honestly wouldn’t be surprised if he has one last scientific revolution left in him



I was also stumped by this exact quote. The whole article was in the best spirit, until this.

It's a model we have; it will get updated in order to be more useful. Every engineering field builds a model of reality. Who are "the connectionists"? Is this some kind of "those people" label for whatever causes fear in a typical layperson?



Precisely, connectionism in modern AI boils down to the idea that learning should be expressed in terms of DAGs that are composed of simpler units. It’s quite likely that the units that are currently used are too abstract, but this doesn’t necessarily mean the paradigm itself is flawed.


I don't think connectionism is restricted to acyclic graphs. Or even graphs in general. But you're right that the connectionism as an approach is more abstract than just simulating neuron behavior.


> It seems weird to have a whole separate organ for “make motor and cognitive skills work somewhat better.”

Does it? I’d think this was an absolute gimme.



Do we have other examples of this?

There is a second kidney, lung, eye etc, but a hot spare isn’t quite the same as a completely different structure.



Your stomach and intestines, they aren't all necessary but they help. And you can see their structure varying significantly in different animals.

Or a tail, an entire extra limb just to keep balance a bit better.

Or ears, you can hear without them but they help capture sound a bit better.



Come on, no one knows... after 10 years in computational neuroscience + experimental neuroscience. The slope of neuroscience advances is very, very low (some will say negative). The slope of AI advances is much, much higher. --> we will get an AI to understand the brain and explain it to us; it will not come from a lab.

Just my point of view



Anyone have cogsci or neurosci reading recs?


https://www.amazon.com/Principles-Neural-Science-Fifth-Kande...

As Jackson is to E&M, Kandel is to neuro. It's the text.



Kandel is superb but it's written for grad students and advanced undergrads with a solid biology foundation. A typical undergrad neurosci textbook would be an easier start for a non-biology person.


Ask and ye shall receive:

- cognitivescience.substack.com

- seantrott.substack.com

- understandingai.substack.com

- neuralnews.substack.com

- brain2mind.substack.com

- biomedworks.substack.com



>"ah yes, the thinking happens in the cerebellum.”

Why do we NOT think so? It must be capable of thinking alone, as only mammals have the neocortex. It would only be logical to expect the more universal cognitive abilities to happen in the cerebellum, and only those that are specific to mammals alone to happen in the neocortex.



Tldr; it's classical conditioning, i.e. the base firmware before installing the higher intelligence and education.


So can it be compared to simd (single instruction multiple data)? An accelerator for walking? Speculative execution?


I'm betting it's a higher-frequency, lower-latency, higher-throughput dedicated part. The slow stuff thinks for longer and sends a "goal" to this part, which translates it into a series of activations at a higher frequency (compared to the slow part), fast enough to produce fluid movement. Probably on the order of a few milliseconds.
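
A toy sketch of that two-rate split (numbers invented): a slow loop occasionally emits a goal, and a fast loop turns it into smooth corrections every tick:

    # Slow "planner" picks a goal every 100 ticks; fast loop corrects every tick.
    pos, vel, goal = 0.0, 0.0, 0.0
    for tick in range(300):
        if tick % 100 == 0:
            goal += 1.0                           # slow, deliberate: "go there"
        accel = 0.4 * (goal - pos) - 0.6 * vel    # fast, cheap PD-style correction
        vel += accel
        pos += vel
    print(round(pos, 2))    # smoothly tracks the latest goal (close to 3.0)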





