
Original link: https://news.ycombinator.com/item?id=40480913

This is a simplified version of the text: Donald Knuth, the inventor of literate programming, aimed to convey the logic behind his code to readers in a sensible order. He devised a clever preprocessing scheme to satisfy both the compiler's requirements and the need for clarity in older programming languages with strict ordering rules. Modern programming languages offer greater flexibility but can still benefit from an organized presentation. Reading code can involve interpreting its function at different depths. At a basic level, one grasps the general operation a piece of code performs. A more advanced understanding requires following details such as function calls, the meaning of variables, and the interpretation of cookies, as well as resolving potential gaps or contradictions in the codebase. Comparative studies between code comprehension and other cognitive processes (for example, reading formal documents, neuroimaging techniques, and predicting code output) reveal both similarities and differences. Researchers using functional magnetic resonance imaging (fMRI) found that when people examine code, language-related brain regions show minimal activation; the task instead relies on the multiple demand network. Compared with spoken or written communication, writing and reading code pose unique challenges: understanding programming concepts requires visualizing problem-solving strategies, adhering to precise syntax, and mastering various libraries and frameworks. Although both coding and linguistic communication demand mental processing, the former tends to elicit distinct patterns of neural response.



I feel like this is akin to asking someone to read a fairytale and to tell you how many times a person crossed a bridge in it, or what they carry in their hand in the end, or where the apple is when all is done.

I don’t think this is about understanding what is written; it’s about figuring out what that which is written means.



That’s an excellent point.

I’ve realized that I sometimes read code in more of a “story” mode, where I’m just looking for the gist of what’s happening and what it is “about” rather than the nuts and bolts of how it actually works.



“Story” mode is why I like to organize my code top-down, often with definitions appearing after their usages.

It makes the code read more like a story from top to bottom.

Some like to organize their code such that any term always appears before it is used. Like a math proof or something. I find this somewhat pleasing but unhelpful for understanding the “story” you mention.



That’s so interesting. I find myself organizing code to have minimum-understandable units, which ends up following the narrowing project->folder->file… abstraction all the way down - the end product resembles more a taxonomy than a story. I love your metaphor though and I am going to try it right away.



Both programs and math proofs are allowed to be written in story mode where definitions can come after their usages. Unless you write in C. There's no story mode in C :-)



The clue is in the next sentence.

If you look for cweb by itself, you might get weird results, because there are only so many four letter acronyms / words. Look for 'literate programming cweb' or just 'literate programming'.



> “Story” mode is why I like to organize my code top-down, often with definitions appearing after their usages.

Yes, it drives me nuts that e.g. Python makes you put the definitions before their use!



I'm not sure what you have in mind, but if you put your "main" Python code inside a "main()" function, you can put that main() wherever you want (like at the top). You just have to call it at the bottom of the file, below anything it refers to.
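A minimal sketch of that pattern (the function names here are made up for illustration):

```python
# "Story mode" ordering in Python: main() comes first, helpers follow.
def main():
    return build_greeting("world")

# Defined after its usage above; Python only looks the name up
# when main() is actually called, not when it is defined.
def build_greeting(name):
    return f"Hello, {name}!"

# The single call site sits at the bottom, after everything exists.
if __name__ == "__main__":
    print(main())
```

Because name resolution happens at call time, the file reads top-down like a story while still satisfying the interpreter.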



Yeah: to me, it really depends on the story being told, and frankly even then it might depend a bit on the proclivities and interests of the person reading the story.

The one thing I'd say almost never works for reading code, though, is the thing people seem to insist is best practice: scattering everything into a hundred separate files with no clear order at all!

Trying to understand that code is like sitting down with an encyclopedia and then being asked to somehow use it to answer the ultimate question of life, the universe, and everything.

The worst is when you grep the code for "main" in an attempt to at least come to your own idea of order, and it turns out that a bunch of files have their own main functions for various unit tests in files the build system just happens to ignore :/.



That's part of why Knuth came up with Literate Programming: he wanted to be able to present the reader of his code with a coherent order that makes sense to understand the project.

Given the strict formal requirements on order of languages like Pascal, Knuth had to come up with a clever preprocessing scheme to satisfy both the reader and the compiler. (See his tool, https://en.wikipedia.org/wiki/Web_(programming_system) )

Modern languages are a bit more flexible, but can still benefit from some re-ordering.

> The worst is when you grep the code for "main" in an attempt to at least come to your own idea of order, and it turns out that a bunch of files have their own main functions for various unit tests in files the build system just happens to ignore :/.

I used to use tools like grep a lot, but I found that my productivity really improved a lot when I set up an environment that supports 'jump to definition' and 'jump to references' etc.



If you read code and don't think about what it's doing, are you really "reading" it?

Reading code without running it in your head feels like reading prose without actually comprehending it.



You can read (scan) code to get the general gist of it. It's more at the level of "this code looks at the pathname, then some cookies, and then either calls the signin function or returns True". That's about the same level where I read most prose. It's enough to answer the question "what does this do" (answer: "calls signin if you aren't logged in but should be") or "how complex is this" but not enough to modify the code or understand the intricacies of it.
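A hypothetical snippet at roughly that gist level (the signin function and the cookie name are invented to match the comment's example):

```python
# Invented request handler matching the comment's gist: look at the
# pathname, then some cookies, then either call signin or return True.

def check_access(pathname, cookies):
    # Public paths need no authentication.
    if pathname.startswith("/public/"):
        return True
    # A session cookie means the user is already logged in.
    if cookies.get("session_id"):
        return True
    # Not logged in but should be: send them through signin.
    return signin(pathname)

def signin(pathname):
    # Stand-in for a real signin flow; simply reports failure here.
    return False
```

A "story mode" scan of this yields exactly the one-sentence summary above, without working out any of the intricacies.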

What you are referring to is simulating the execution in my head to figure out which execution flows exist, figuring out which cases aren't covered, what the function calls actually do, what the significance of those cookies is, etc. That's similar to how I read contracts. But unless I have to write a literary analysis or somebody asks me to find all the plotholes and inconsistencies I don't read normal prose that way.

It might be interesting if there are similarities between programmers tracing execution flows, lawyers reading an unfamiliar contract, and editors finding inconsistencies in short novels.



But what participants in this study were actually asked to do is:

> predict what action the code would produce

So "this code looks at the pathname, then some cookies, and then either calls the signin function or returns True" seems to align with that.



I'd put most of my code reading much closer to "inferring intention" than to "running in my head". It's really more like navigating through a complex building, looking for certain rooms than simulating execution. "running in my head" is to reading code like "pronouncing in my head" is to reading prose. Both aren't the norm.



Sometimes working out the details of what the right output of a piece of code is (like: in the line x = f(15), is x 2345 or 2346?) is very time-consuming.

It's also sometimes unnecessary for understanding the high-level picture of what the code does (like "oh, this computes the GCD using the Euclidean algorithm; I had to code this in uni once").

So, most people read in a higher level fashion and understand just fine. It's really like reading math. When you read 23443 + 54454 * 454534, do you need to work out the computation to understand what this does?
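The GCD example reads exactly that way: a sketch like the following (the standard Euclidean algorithm) is recognizable at a glance, no tracing required:

```python
def gcd(a, b):
    # Euclidean algorithm: repeatedly replace (a, b) with (b, a % b);
    # when the remainder hits zero, a holds the greatest common divisor.
    while b:
        a, b = b, a % b
    return a
```

A reader who recognizes the pattern knows what gcd does without simulating a single loop iteration, just as one reads "23443 + 54454 * 454534" as an arithmetic expression without computing it.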



You can read code line by line and understand what each line does, but any algorithm that has more than 5 lines of code probably needs an additional step where, after reading it line by line, you need to take a step back and think about it in a different way to understand what the whole block of code is trying to do.



Maybe? I don’t need to read and calculate every iteration of some loop to understand that it’s mapping. While if I need to determine the actual output value of a function, I might.



> If you read code and don't think about what it's doing, are you really "reading" it?

A lot of code doesn't _do_ anything. And you can still read it.

Eg I'm thinking about type declarations, or constraint based programming, etc.
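In Python terms, such code might be plain declarations (the Account type here is invented for illustration):

```python
from dataclasses import dataclass
from typing import Literal

# Declarative code: these lines describe shape and constraints;
# there is nothing to "run" in the usual sense.
@dataclass(frozen=True)
class Account:
    owner: str
    currency: Literal["USD", "EUR"]  # only these currencies are allowed
    balance_cents: int
```

Reading the declaration alone tells you which fields exist, which currencies are valid, and that accounts are immutable, with no execution to simulate.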



Compilers understand code pretty well, but they don't run it. (Similarly, if you are a human translating code from C to Pascal, you'll want to understand it, but you won't necessarily execute it by hand to get that understanding.)

How could you run code, when you don't know the inputs?

You can reason through how the code could run with some example inputs, but that's just one of many tools we have in our belt to comprehend code.

In eg Rust or Haskell just looking at the types is pretty useful. Or in linear programming, you might want to think about the constraints expressed, but not actually do any arithmetic yourself.



I’ve never seen a compiler that can answer the question: “what is the purpose of this function?” On the other hand, if I ask a human that question about a particular function and they are unable to answer it satisfactorily, I would have an extremely difficult time believing that they understand it.



Humans don't necessarily run the code step by step in their head to answer this question, either.

Quoting again the part that prompted the discussion:

> While the programmers lay in a functional magnetic resonance (fMRI) scanner, the researchers showed them snippets of code and asked them to predict what action the code would produce

The question wasn't about whether the programmers understood the code but whether they can predict what the code will output.

I hypothesize that either the programmer recognizes the purpose of the code or they don't.

If they do recognize it, their brain will probably engage with the code differently, in a more high-level way, taking the bigger picture into account; but if they don't, they probably need to execute it step by step to have any chance of predicting what it does, which is akin to doing a calculation.

It also depends on whether the code is a tangled mess or well structured; how descriptive are function names and variable names; whether the code has comments; etc.

And of course in real-world code there is a mix of both modes of reasoning. And the point is, those two modes of reasoning may well activate different regions of the brain (or maybe not?).



> It also depends on whether the code is a tangled mess or well structured; how descriptive are function names and variable names; whether the code has comments; etc.

Well, it also depends on whether they suspect that the code has a bug.

Normally, we try to understand code by taking shortcuts through function names and structure, and fill in the rest via pattern matching.

But if I suspect that a piece of code has a bug, I work hard to turn off these higher levels and concentrate on what's actually written.



Sure, but your human doesn't necessarily gain that sense of purpose by executing the function in their head. More likely, they are looking at the name of the function and its arguments and return values first, and work from there.

I wasn't arguing here that any existing compiler shows a human level understanding of code. Just that there are many ways to understand code, and executing in your head is but one of tools in your arsenal.



> Running it is equivalent to comprehension.

Interpreters run your code, but they don't 'understand' it any better than compilers, which don't run your code.

In a sense, compilers typically 'understand' your code better, allowing them to do more optimisations.

(The boundaries in practice are more blurry.)



Different, but not entirely different; both use the same kind of reasoning about program behavior. And the article is not about writing code, other than to the extent that it should be comprehensible by others.



This is exactly why I dread the future when people say "oh you will be able to tell the AI what to do, in normal language, and it will do it for you". I do not want to have 8h of "normal" text or speech conversations each and every day. It sounds exhausting. 8h of dealing with programming languages, though, that's fun!



> This is exactly why I dread the future when people say "oh you will be able to tell the AI what to do, in normal language, and it will do it for you"

If it ever comes to that, it should be close to pair programming with AI and it should allow a mix of AI generated and human written code.

Like unsafe in Rust, we should be able to say "I know what I'm doing here, stay out of my way". Then let AI verify human written parts for correctness by means of tests or any other methods.

I think we sort of do this already, but it is the humans who verify the AI generated code and I believe we are a long way from swapping the roles.



I'm more surprised that anyone with any experience in programming languages has actually thought this could be the case. Making programming languages superficially more like human languages is one of the classic rookie mistakes of programming language design. (See Perl's "do what I mean" paradigm and the chaos that resulted from it)

I think an important difference right there is that programming is more about "constructing" some sort of entity (a formula, an algorithm, a data structure, an architecture of different components, etc.) than about communication. Consequently, a lot of the "thought" when programming is about the implicit properties of the thing being constructed and less so about the syntactic specifics.

You can do the same in human language, e.g. if you write a recipe, a legal text, a mathematical definition, rules for a board game or instructions for an LLM, but human language can also be used for many modes of communication that have completely different purposes.

I think a comparison of brain activity between reading computer code and reading formalized human language - or between reading formalized and non-formalized human language would be more interesting.



Knowing and using a bunch of programming languages, and also knowing and using several human languages to various levels, I'm firmly in the camp which says that these are totally, utterly different things. Learning a programming language is just as learning any other particular skill (though it's better if you have a programmer's brain, or rather, interest), while learning a human language is.. just very different. And tremendously more difficult, and it doesn't help that it's mostly a semi-conscious business, in fact "studying" in the general sense gives very little return.



>> Making programming languages superficially more like human languages is one of the classic rookie mistakes

one of the least productive programmers I've worked with was obsessed with Groovy



I'm approaching 30 years of experience and I do read code like human language. The visualizations I'm using for data structures aren't really much different than visualizing a scene from a book.



From the linked article, they found that it activates the region of the brain that carefully reasons through unfamiliar problems. They also speculate that this might not generalize to people with 30 years of programming experience. Well, fine, a lot of academic research is done with students. But doesn’t that invalidate any comparisons with natural language processing, something that the test subjects do have decades of experience in?



Maybe compare to second year students of a very foreign language...

(Though I have to say, I'm a self-taught programmer with an academic background in theoretical linguistics as well as university-level experience of several languages quite different from my first language, and this result doesn't surprise me. There are some obvious analogies to be made at some levels, but I think the similarity between human languages and computer code is frequently overstated and a remnant of a more primitive linguistics that was quite narrowly focused on grammar.)



> I think the similarity between human languages and computer code is frequently overstated

It wouldn't even occur to me to think of them as belonging to the same category.



Occasionally you come across languages that do and, at least to me, those languages feel extremely difficult to deal with. For instance, I found AppleScript and HyperTalk to be awful to work with.

I also notice that some junior programmers I've worked with tend to argue for coding styles in mainstream languages that read almost like sentences. Not sure if this is a generational thing or an experience thing.



I’ve seen the same thing, it might be generational. Someone arguing that we need an isNotEqual(a, b) helper in addition to isEqual(a, b), because that’s supposedly “cleaner” and more readable than !isEqual(a, b). They were very vehement about it, but I don’t get that at any level. It’s more lines of code to write and maintain, more unit tests, more characters to type at the usage site and to me, reads exactly the same way.



My personal opinion is that 'not' (Python-style) should be the operator instead, simply because it's easier to miss a ! than a 'not'. With that said, I'm with you in the 'not isEqual(a, b)' camp.

The exception, imo, is functions that take boolean arguments. Languages should make it easy to show what they mean, and devs should take care to explain them.

sort(reverse=true, numeric=false) is much easier to read than sort(true, false).

(Silly example, it's meant to illustrate the general point, not to say anything about the signature of a sort() function.)
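Python's keyword-only arguments make that point concrete (sort_items and its numeric flag are hypothetical, echoing the comment's example; the built-in sorted only takes key and reverse):

```python
# Boolean positional flags are opaque at the call site:
#   sort_items(["10", "2", "1"], True, False)   # which flag is which?
# Keyword-only arguments (everything after *) name their meaning.

def sort_items(items, *, reverse=False, numeric=False):
    # 'numeric' is the comment's hypothetical flag; here it simply
    # compares values as floats instead of as strings.
    key = float if numeric else None
    return sorted(items, key=key, reverse=reverse)

# The call site now explains itself:
# sort_items(["10", "2", "1"], numeric=True) -> ["1", "2", "10"]
```

The bare * forces callers to spell out the flags, so the "sentence-like" readability is guaranteed rather than optional.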



For me, reading isNotEqual() introduces an extra mental variable. I have to spend extra effort to keep track of the bar over the statement that doesn't exist in the text.

But that said, I see no reason that someone couldn't learn to parse notX as fluently as I parse !X. I suppose it's one of those fashions that change with the times. Probably a big reason that I stick to good old C a lot of the time.



What's old is new again - see SQL and COBOL. They certainly are easy to get the gist of from the words even if you don't know the language. (The business logic may be spaghetti in a COBOL program, but what's happening is clear.) I like it, and I remember SQL easier after disuse than the syntax of something else I've used equally sparsely like bash scripting.

Then on the opposite end of the spectrum wrt new things, we have Rust, which comes off to me as pathologically terse and inscrutable - certainly much more difficult for a newbie to parse than SQL.



I would be very curious to see what happens when the subjects are looking at a codebase they are familiar with. Even if there is some code within the repo they haven’t seen yet, the familiarity from style, convention, and knowing how the code fits into the rest of the program as a whole probably has a large impact as to how it is perceived.



What I understand from this is that it could be that with 30 years of programming experience, the region of the brain that carefully reasons through unfamiliar problems activates less.

That would make sense: if you are unfamiliar with programming, it makes sense that the unfamiliar problems zone activates.

It would be interesting to study this, to know if this zone activates because of the unfamiliarity or if it still activates with experience while dealing with code.

It could be that the zone will activate regardless (i.e. because when you are reading code, you are somehow always dealing with unfamiliar stuff - because this is why you are reading code in the first place usually) or not (i.e. because reading code is now familiar).

> But doesn’t that invalidate any comparisons with natural language processing, something that the test subjects do have decades of experience in?

It could: for the comparison to be fair, the test subjects would probably need experience in both natural language processing (which they have) and programming. We most likely won't find people equally familiar with both, so we'll almost certainly have to make do with "familiar enough with code".



Lex Fridman recently interviewed Edward Gibson, a linguist, and they talked a lot about how the brain processes language, including how someone reading code lights up a completely different part of the brain than language.

#426 https://lexfridman.com/podcast/

ETA: I believe Evelina Fedorenko, from this paper, is Edward's wife.



I believe it. My worst grade in primary school was french and I'm having such a hard time learning Chinese compared to my peers (taking evening online classes in community college) but programming has always come naturally to me. Completely different skillsets as far as I'm concerned.



> Completely different skillsets as far as I'm concerned.

My experience differs:

I find the way programming is typically taught (including self-taught) much more fitting to my brain than the way natural languages are taught.

From my experience I do believe that if natural languages were taught in a different style, it would be much easier for a programmer brain to get good at them (I also consider it plausible that there might exist a way of teaching programming languages that is more natural to "language-minded brains").



My worst grade in school was German. Then, after graduating and while working my third programming job, I started learning it again and realized I'm actually really good at human languages (learning words and reading came naturally to me). It was just the school environment that stopped me.



I run a YouTube channel that's mostly devoted to reading code: https://www.youtube.com/@ants_are_everywhere

I (try to) read code from popular open source projects. It's a bit like sight reading in music because the idea is that I go in blind and try to make sense of it.

This rings true to me:

> The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks.

There is language of course, but I'm not working hard to process the language so it wouldn't show up as an fMRI signal. I am often holding several things in mind at once, including things like what the code does, what the authors are trying to do, what the author's background might be, how similar software works etc.

I did study math, and it also doesn't really feel like reading or doing math.



And this is why getting interrupted is so annoying

It takes a while to "load up" all of those things in our minds, and it's a bit like a balancing act: you need to keep balancing them to prevent the whole thing from crashing down.

Then if you get interrupted, it is like losing balance and things falling into a mess. Afterwards you need to sort of pick everything up again, which takes time and effort.

More than a couple of those in a row are incredibly frustrating

pg talks about this in his essay on the maker schedule vs. the manager schedule (2009): https://paulgraham.com/makersschedule.html



I had this a lot in my 20s, but I notice in my 30s that an interrupt won't derail me as much.

It's not that my brain is better at caching state; I think it's to do with being a better coder: meaningful comments, better proficiency in using the IDE, knowing when not to implement something from scratch.



Taking notes, taking notes, taking notes. Notepad as an external memory is just an amazing game changer for this. Doesn't even need to be deep, just a running tally of what you're thinking to get it to somewhere that you can reference it instead of running it on a loop in your short term memory. Makes fading in and out of it so much easier. I treat it like a conversation with myself half the time and the other half the time like I'm writing a reference book for someone else. Its worked well for fresh code, code to be, and analyzing existing code made by someone else. Cannot recommend it enough.



Note taking is great not just for recovering from distractions but also preventing them in the first place. Just jot down branching tasks that would've been a distraction and come back to them later instead of being the Hal fixing a lightbulb meme.



There's a fairly simple explanation for this. We've seen in studies that experts aren't fundamentally any better at remembering things. We just get better at chunking more and more abstract groups of information together. We can never increase the around 7 pieces of information we can hold in short-term memory. But we can increase what and how much those 7 pieces represent.



Likewise. Both code and math are almost physical objects in my mind. I hold these objects and relationships between them in my mind.

I find it interesting that for language I find reading easier than writing, but for code or math it is the opposite. When reading code or math it almost feels the same as when I’m learning a foreign language and trying to read it. I have to “translate” it into my internal “objects”.



Most code is interpreted as declarative regardless of how it's written. I've always interpreted a chunk of functionality as a block of logic, which is why syntax structures like if/else feel so natural. But we don't really follow natural language the same way.



>>This network, whose activity is spread throughout the frontal and parietal lobes of the brain,

A (rare enough) person in chronic coma, regaining consciousness on a sedative-hypnotic drug zolpidem, which lasts for a couple of hours as long as the drug is active, can also show parietal lobes (and a bit of dorsal frontal lobes) activation (as showcased probably in Hamilton's Pharmacopoeia in a series on zolpidem).

Thus, according to modern neuroscience and brain imaging, reading code is basically the same as regaining consciousness on zolpidem while in coma.



> It's a bit like sight reading in music

I was wondering if there was any correlation in the ability to sight read music and to quickly understand code.

I played a few musical instruments as a child and teen but never got good at sight-reading.

I've heard that musical ability and programming ability are often correlated but I could never have made it as a working musician. I did make a good career out of programming though.



I've been curious about this too. I do play music. I've played maybe about 6 different instruments live in bands. But I was never any good at sight reading.

My guess is if I were to try it now I'd be a lot better at it than I used to be.

I do, however, rather consciously think of it as a form of improvisation. And I am much better at improvisation than I was at sight reading.



Not surprising at all: reading code is not like reading prose, nor even like doing math; it's more like assembling a puzzle in the brain. There are a myriad things to mentally connect and substitute/translate in real time in order to understand what the code is doing (each function, symbol, variable, statement, abstraction, all the way down), which makes reading code even more complex than writing it in the first place.



It looks different and has a very different syntax and meaning. Practically the only thing it has in common is that it's also called "a language."



Yes, and I think the argument could be made that code is the exact opposite of language, since it's a set of machine instructions that lay out what the computer will do and can't, or won't or shouldn't, be a metaphor for something else as happens in the domain of language.



To me, reading code is more like looking at one of those pictures of a set of gears and then being asked whether “gear A turns anticlockwise or clockwise”. At least this is how it feels. It’s more like studying a mechanism than understanding language.



I mostly work at home, and when I'm coding, I generally have music on. I like to sing along with music, and I sit at my desk singing and coding all day long.

But I could never do this if I was reading something. It's impossible to read something and sing something else at the same time. But it's perfectly easy to do while coding.



Well, people are different. I feel like I have a narrator voice for the code, not exactly the same way books do, but still something similar.

Also when I'm trying to read code I feel distracted and annoyed by any background noise where I can recognize words, like speech or music in a familiar language. And I can't talk while coding, it messes up what I'm typing. Also I really prefer english-like languages, like ruby, to more sigil or symbol loaded, like rust, or full of abbreviations and operators, like Haskell. But I'm sure other developers have other brain pathways activated.



I have a narrator voice when reading code. It's the same narrator voice as when reading technical reports and white papers. I do sometimes struggle to process code that doesn't read well, likely because the cadence of the narrator voice gets disturbed and doesn't flow well.

I've found that my descriptions and explanations of how things work generally follow the same cadence as this narrator voice, which tends to help me explain things succinctly and transfer knowledge quickly.

I do agree it's distinctly different from the narrator voice for fiction books. I would assume that's due to the presence/lack of emotive language between the two types of writing.

Perhaps this is a similar phenomenon to the inner monologue that some people have but others don't? Or the ability to imagine various levels of detail of objects without physically seeing them? The mind is a strange beast.



My narrator does tell me the following:

1. Are return values identical for identical arguments?

2. Are there side effects? What are those?

3. "Congratulations, this narrator here found a story that can be told in isolation!"

4. Are the variables used in close locality to their declaration?

5. Are all these loops doing what I think they do? (This one requires iterations of temporary code rewrites)

6. Is the story told with high efficiency? Can some side effects be prevented?
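Items 1 and 2 of that checklist roughly distinguish pure functions from side-effecting ones; a minimal illustration (names invented):

```python
# Pure: identical arguments always yield identical return values,
# and nothing outside the function changes (checklist items 1 and 2).
def area(width, height):
    return width * height

# Impure: the result depends on hidden state, and each call mutates
# that state. These are the "side effects" the narrator asks about.
_counter = {"n": 0}

def next_id():
    _counter["n"] += 1
    return _counter["n"]
```

The first function tells a story that stands in isolation (item 3); the second can only be understood together with everything else that touches _counter.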



Fun fact, when comprehending sign language the narrator voice is the same as when comprehending vocal language. By the same I mean that it works and "sounds" the same, but obviously it has the grammar and vocabulary of whatever sign language you're communicating in. The exact same parts of the brain are at work, too.

I think this implies that the narrator voice isn't really a "sound echo" but rather your comprehension at work. Just like when reading text.



I would say `narrator voice` in this context is distinctly different from internal dialogue. Obviously the internal voice can narrate to yourself, but that voice tends to be slightly different in my experience.

Narrator voice is a specific voice, cadence, pattern etc. that your internal voice takes on when reading a narrative, rather than the narration of your experience. For me, I have a different voice when reading/writing technical documentation compared to reading/writing fictional works, which both are different from my day to day internal dialogue.



Not to dismiss the study - I agree that hypotheses require proofs - but, based on my personal experience coding *feels* much more like math, and almost not at all a spoken language, except for the fact that we give identifiable "names" to things (keywords, variables, functions, etc.). Well, not unlike in math in fact.

I wonder if the authors (neuroscientists) discussed the premise with programmers and linguists to formulate the hypothesis. I mean, I would say that the question should have been: are the same regions of the brain activated when doing math/chemistry/physics/neuroscience(so meta!)/science and coding, instead of when talking/writing/reading and coding.

Maybe there are people who experience coding differently - please let me know if so, but I would be very surprised.



> based on my personal experience coding feels much more like math, and almost not at all a spoken language

To me, both spoken/written natural languages and programming languages feel a little bit like math. :-) (no joke!)

I thus often tend to analyze subtle aspects of words just like one does code audits for hidden backdoors (admittedly - native speakers of English agreed - this is easier and more fruitful to do in German, my native language, than in English).



It would be interesting to look at programmers with very long experience (two to four decades) solving programming tasks in different domains. Also, I'd be interested in seeing what happens when they write code for domains entirely new to them vs. very familiar ones.



I think there is some nuance to it. The absolute best thing for clean code for me has been taking courses in academic writing. But also, why would it activate just the language centres of the brain, especially if you are asked to predict what a piece of code would do? I think comparing to, say, a planning task would be more apt...



I was just thinking today about an idea to introduce some programming constructs to legal documents like contracts and organization policies/procedures. The specific situational semantics inside the documents would still be natural language and meant for human decision makers to implement, but the control flow aspects would be rigidly formalized well enough that documents conforming to this standard could be automatically turned into flow charts. More generally, it would ensure the documents conforming to the standard are able to be "read like a machine" to a certain degree (if not outright machine readable), which has implications for the ability of regulators and judges to "debug"/"trace" situations, or to pro-actively identify "edge cases", or to make tooling which assists with analyzing the documents and related paper trails, irrespective of the particular organization where the matter came from.

So as a follow up to this research, I'm really interested to see, does the brain read legal/formal documents more similar to how it reads code, or more similar to how it reads natural language. Or is it somewhere in between? Does it change when we introduce some syntax constraints on the legal documents? How much rigid syntax does there need to be before the brain switches from one mode to the other?
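As a rough sketch of the idea: a contract clause whose situational semantics stay in natural language while the control flow is rigidly structured, rendered mechanically as a flowchart-like outline. The clause format, field names, and rendering below are invented purely for illustration.

```python
# Hypothetical sketch: situational semantics stay in natural language,
# but the control flow ("if"/"then"/"else" nesting) is formalized well
# enough to render mechanically. All names here are invented.
clause = {
    "if": "payment is more than 30 days overdue",   # judged by a human
    "then": [
        {"do": "apply a 2% late fee"},
        {"if": "payment is more than 90 days overdue",
         "then": [{"do": "refer the account to collections"}],
         "else": [{"do": "send a written reminder"}]},
    ],
}

def to_flowchart(node, indent=0):
    """Render the formalized control flow as an indented outline."""
    pad = "  " * indent
    if "do" in node:
        return [f"{pad}[action] {node['do']}"]
    lines = [f"{pad}<decision> {node['if']}?"]
    for branch in ("then", "else"):
        if node.get(branch):
            lines.append(f"{pad}  {branch}:")
            for child in node[branch]:
                lines.extend(to_flowchart(child, indent + 2))
    return lines

print("\n".join(to_flowchart(clause)))
```

A regulator "tracing" a situation would then just walk the outline; only the decision texts require human judgement.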



> I was just thinking today about an idea to introduce some programming constructs to legal documents like contracts and organization policies/procedures.

In my late teens I saw a contract my dad had brought home, and that one at least already had some basic constructs - I saw analogues to variables, logic/conditionals (as used in a programming language rather than natural language), and possibly even subroutines.

And that's just what I remember, it wouldn't surprise me if there were already more such commonalities. It would certainly explain why legal documents are so difficult for the layperson.



To me, reading new code is like separating a bunch of string that's been tangled together. Which thread goes where, what is its purpose, how is it connected to the other threads? It's a very active process.

Reading language is different, I move my eyes left to right, top to bottom, to consume someone else's ideas. It's a much more passive process than reading code.



Fascinating. As someone who has written (and read) a lot of code and written (and read) a lot of legal language, and as someone who has consistently recommended that young people learn programming principles before going to law school, I'd be very curious to see if reading a complex contract prompted the same response.

I always heard that "you'll never read for pleasure again" after law school, but, if anything, I read for pleasure more now than I ever did, and my work "reading" feels like something completely removed from it.



If you are interested in this topic, I suggest watching the conversation between Edward Gibson and Lex Fridman. In the middle of the conversation [1], Edward talks about how there is a "human language comprehension network" in the human brain, which gets activated only when we read or speak a human language, and by nothing else. For example, for those who speak multiple languages, reading or writing in any of those languages activates the network. But neither reading gibberish nor reading computer code activates the network.

[1] https://www.youtube.com/watch?v=F3Jd9GI6XqE&t=4906s



I am wondering if the domain model is not influencing the results.

If the code is describing a math function then it seems to me reading it will activate the same parts as math.

What if they read business logic that is about, say, users creating accounts?



I have a stupid question, does anything ever actually get replicated?

I mean; I have a graduate degree, and I’m proud of the work and I think it could very practically be replicated… but will it? Probably not.

If you claim to have invented cold fusion, probably, but for a study about the coding brain I can't see replication even being considered worthwhile.



I'd be curious to see some research like this done on children with dyslexia. Dyslexia seems to favor tasks that use the MD network more bilaterally (I'm dyslexic, so I often think about brain functioning in that context).



I could have told them this decades ago. I get migraines with an aura that starts in the middle of my vision and spreads to one quadrant. Years ago they were more commonly in the top right quadrant than they are now. With those, the chances were high that I'd get the symptoms of aphasia too. For about 20 minutes, I couldn't form a sentence or understand one. However, if I was working on embedded C, I could read, write and debug code without any issue. The comments, however, might as well have been in Greek.



I'd say coding is more like playing a game than reading. It is initially difficult but once the fundamentals are learnt, the activity becomes one of how to use the fundamentals in creative ways to achieve a goal.



People have always told me the best programmers/computer scientists are also great mathematicians, and this seems to support that, especially thinking about Turing, von Neumann, and Knuth.



Academic/theorical computer scientists, totally.

But pragmatic programmers / software engineers, try reading the average PhD's python code. Usually it is a mess.

Knuth may be an exception, rather than the rule.



> “Understanding computer code seems to be its own thing. It’s not the same as language, and it’s not the same as math and logic,” says Anna Ivanova, an MIT graduate student and the lead author of the study.

I'm not sure if this is what they are implying but I find the statement "reading computer code is the same as reading language or math or logic" to be an odd null hypothesis.



Every one of these is a language, and I'll add music. The purposes and construction of these types of languages vary, so the thought processes should too.

Computer programs can be written in a proof/solver language which would get closer to math/logic. This puts things on a grayscale IMO.



Not reading but writing code gives me a feeling similar to playing the piano, especially in highly interactive programming languages like Clojure or Common Lisp. Does anyone have similar experiences?



Not me. Far from it, actually. Playing music feels very distinct from programming. However, mathematicians and computer scientists are more drawn to Bach than other people. There's some connection between music appreciation and programming.



I would think intuitively it is a different brain process.

I wonder if brain-wise reading code is similar to a judge reading through various laws to make a judgement?

Is it also similar to reading a math proof? Are all these things different?



I wonder if some coding languages are more akin to 'normal' languages than others. I found it interesting that ScratchJr activated more of the brain regions associated with spatial tasks, which makes sense, as I believe Scratch is most often used for computer-game-type apps?



When computers read programming languages, that also doesn't activate anything resembling language processing, so why would it require human language processing capabilities.



I can tell this is true because every now and then with enough syntactic sugar I write code that CAN be read almost like language, and reading it that way takes a shift of mind that feels weird. It’s like putting on a different set of goggles.



Yeah, I feel like this is why fluent APIs sometimes irritate me. They can be clever and can feel natural:

do(thisThing).and(thisOtherThing).until(someStopcondition)....

but when this breaks down it becomes quite ugly.
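For illustration, here is a minimal fluent-style sketch in Python (all names hypothetical, just echoing the do/and/until example above). The chaining works because each method returns the builder itself:

```python
# A minimal fluent-builder sketch. Each method returns the builder,
# which is what lets calls chain left to right and read like a sentence.
class Pipeline:
    def __init__(self):
        self.steps = []

    def do(self, action):
        self.steps.append(action)
        return self  # returning self enables chaining

    def and_then(self, action):  # "and" is a Python keyword, hence and_then
        return self.do(action)

    def until(self, stop_condition):
        # Terminal call: run the accumulated steps until the condition holds.
        results = []
        while not stop_condition(results):
            for step in self.steps:
                results.append(step())
        return results

counter = iter(range(10))
print(Pipeline().do(lambda: next(counter)).until(lambda r: len(r) >= 3))
# [0, 1, 2]
```

The ugliness shows up as soon as the logic no longer fits the sentence shape - e.g. a step that needs the results so far, or error handling mid-chain - and you end up bending the API instead of just writing a loop.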



I'm not sure LISP (which I learned while working (at Boeing Computer Services (which no longer exists))) is a good candidate (for a programming language (that would work like human languages (at least the ones I know))).



A really interesting finding considering how many comparisons of code to literature exist, e.g. the advice of laying your files out like a "newspaper", writing your code in logical progressions like an essay, etc.



I've never heard that advice about newspapers? Maybe you're thinking of literate programming.

Anyway, I don't think it's a remotely interesting finding. Anyone who has done any programming knows that it is not like natural language. We even know that explicitly from the complete failure of languages, like AppleScript, that try to be both programming languages and natural languages.

The fact that you use English words in your identifiers and comments in your code doesn't make it English any more than writing a paper about maths and using `m` for mass means maths is linguistic.



I’m not thinking of literate programming, no. I’m familiar with that. I’m referring to a fairly well-known recommendation from Clean Code.

https://www.baeldung.com/cs/clean-code-formatting#vertical-f...

I don’t agree that “anyone who has done any programming” knows it’s not the same. The number of comments to the contrary in this thread is evidence enough. The parallels between natural and programmatic languages are not just undeniable but often useful, and how the human brain models its understanding of code is orthogonal to that utility.



Programming is more spatial than linguistic. It requires temporal understanding in order to formulate, and usually data 'structures' are based around physically laying data out.



Stands to reason. The brain has to constantly resolve ambiguity when reading natural languages, while artificial languages are (usually) designed to be free of ambiguity.



"Language" in the context of PL is a misnomer. People even scientist seem to fall into that trap (remember the literate programming movement). PL's are better described as "Specifications". More similar to file format specifications, later converted to executables or interpreted.

> the researchers showed them snippets of code and asked them to predict what action the code would produce.

That is only a (small) part of programming (the simulating in your head). I'd argue that the biggest part of programming is organizing and communication.



I disagree.

Wikipedia's definition [1] is what I had in mind for language:

> Language is a structured system of communication that consists of grammar and vocabulary

Programming languages are structured, have grammars (often described formally using a grammar definition for some parser generator) and vocabulary (the set of functions and types you can use, possibly modules and package, basically), and they are used to communicate (with the machines, and with fellow developers).

The "file format specification" would be a (subset of) the specification of the language, not the language itself, and yes, a file format specification can describe a language (but I believe we only say this for stuff intended to be read and written by a human - so ZIP would not be a language, but HTML would be).

I do agree that programming languages and human languages are not the same thing, they have different characteristics, traits, use cases, behavior… and that's why this study compares them. And that's also why we need to make the distinction. And they also have common stuff, hence the same word. But even if you disagree that they have a relevant amount of common stuff, polysemy is a thing, so it's still "fine" to call English a language and C a language. Quotes around fine because I'm actually only describing what happens: we do call them both "language", that's a fact.

> I'd argue that biggest part of programming is

It might be, but the paper studies and focuses on programming languages, not the more general practice of programming (which is worth studying, but studies always have a (restricted) scope).

> remember the literate programming movement

Where's the confusion in / what's wrong with this? Some think that it's better to narrate in code, and we call that literate programming. I don't share the vision, but everything looks clear to me.

[1] https://en.wikipedia.org/wiki/Language



It’s more than a coincidental resemblance. Programming languages were designed using constructs from linguistics (context-free grammars, parsers) that were originally developed to describe natural languages.



Who are you "communicating" with? A programmer does not communicate with a computer, through a PL. He or she writes it down, to later be executed or interpreted, so the computer can "communicate" with the user, through the User interface (not the PL).

Also, the vocabulary works very differently: in a PL, almost all of the vocabulary is defined ad hoc, as you go.

I mean I don't have a problem calling them a language, people should just be vary not to use language ideas for PL.



> Who are you "communicating" with?

I wrote this: "with the machines, and with fellow developers"

> A programmer does not communicate with a computer, through a PL

I disagree with this. The programmer describes the steps that the computer needs to perform, to the computer. Or the programmer describes the state which the computer should reach.

The communication is one way (except, arguably, during debugging sessions), but a communication does not need to be two ways.

> He or she writes it down, to later be executed or interpreted, so the computer can "communicate" with the user, through the User interface (not the PL).

You seem to set yourself in the end user perspective, but the developer is also a human, and I'm in the developer's perspective here.

> people should just be vary not to use language ideas for PL.

Agreed, some / many ideas applying to natural languages don't apply to programming languages. But some do.

> the vocabulary works very different

Absolutely. There are strong differences.

(of course, it's fine we don't agree, we are arguing on definitions)



Even if you consider it communication, that doesn't make it a language. Plenty of communication happens through other means.

If we look at a debug session or bash scripting - the closest thing to a two-way communication - it doesn't "answer" in code.

I see code as closer to something like a floor plan an architect may use, or a CAD drawing from an engineer. Both of them also communicate, but they are not considered languages. They also have rules (grammar?) and patterns (vocabulary?).



> Even if you consider it communication, that doesn't make it a language

In my first comment, I have broken down the definition of language I found and (supposedly) successfully applied each part of it to programming languages (communication being only one of the features).

Where is my mistake?

Either the definition is wrong, or incomplete, or one of the matches I made is erroneous, or I missed a match.

> an architect may use or a CAD drawing from an engineer. Both of them also communicate, but they are not considered languages. They also have a rules (grammer?) and patterns (vocabulary?).

Tough one, maybe those drawings are indeed expressed in a language, just that we don't usually see it as a language / think of it like this. That would apply to map symbols as well [1].

edit: Just noticed French Wikipedia [2] actually cites cartography as a language without a syntax, so that would check out:

> la cartographie est un exemple de langage non syntaxique

("Cartography is an example of non-syntactical language")

French Wikipedia broadly has the same definition for language as English Wikipedia, but makes the grammar part optional.

We could argue that the examples you cited are indeed not languages under the English definition because they don't have grammar (if the rules you mention are not grammar; and indeed you would probably not need a pass of syntactic analysis to "parse" a CAD drawing)

[1] https://en.wikipedia.org/wiki/Map_symbol

[2] https://fr.wikipedia.org/wiki/Langage



A language is ... does not mean ... is a language. A dog is an animal but an animal is not always a dog.

I think it's clear for the both of us that PLs are not the same as languages. I would just go a little further and calling it a misnomer. For me that is a helpful framing, especially once you go into visual programming.



> A dog is an animal but an animal is not always a dog.

In this sentence, "is" describes membership (a bit like isinstance in Python, or instanceof in Java or JS). In a definition, this is not the same "is".

In a definition, "is" is the definitional "is", and what is after the "is" is supposed to be a comprehensive list of things you need to match to be called the term preceding the "is" (a bit like "class" or "struct" when you define a type)

Given the definition "A dog is a domesticated animal that has four legs and barks", everything that is an animal, domesticated, that has four legs and that barks is indeed a dog, and something that misses even one characteristic is not a dog (though I made up the definition, it is likely wrong).

Again, polysemy. Not all uses of be are equal. So:

> A language is ... does not mean ... is a language

Indeed, though not for the reasons you had in mind when "A language is ..." is a definition of language: both things around your "does not mean" are indeed true (when you were probably hinting at the second part being false, given your dog example), but the first "is" is definitional, and the second is membership. Both are true because the definition actually defines membership for each individual object described by the definition, and by putting the list of requirements of the definition at the beginning of the sentence, you are describing a particular item that trivially matches the definition. Almost like I did in my example (except I used "everything that", instead of "a").

> I think it's clear for the both of us that PLs are not the same as languages

As natural languages, yes. Which, granted, we would call just languages in everyday life.

> visual programming

How is this related?

I respect your opinion that "language" in "programming language" is a misnomer but have not been convinced so far, and the more you write, the more I'm convinced that your internal definition of language is just too restrictive; I'm also not seeing how this restriction/framing actually helps.

I'm a simple person. You give me a definition of A, you give me something S; if S has all the characteristics given in the definition, S is definitely an A according to this definition.

Now, we can discuss the definition and how it's not entirely (or at all) satisfying, complete, appropriate or useful, and how it could be improved, or how the trick of using qualifiers to further restrict the definition does not cut it.

On this last point, I, so far, have not felt the need to change the definition of language I know to express my ideas clearly, and have been happy to use qualifiers like "natural" or "programming" when needed and unclear from the context. In particular, I have not encountered a situation where calling a programming language a language lead to confusion or bad consequences.



> How is this related?

That is sort of my point, if you think of a PL as a language, visual programming is not related. You are drawing a line somewhere between PL and visual programming. If you think of a PL as a specification, it is the same thing just presented differently.



You can see the vocabulary and the rules of a language as a specification (for the set of productions of the language) if you wish. A language is a specification (although the reverse might not be true).

Visual programming certainly deals with a (visual) programming language, which is still a language. A language doesn't need to be textual. It only needs grammar rules and vocabulary. For instance, taking the example from [1]:

- the "set" block and the "for" block are elements ("words") of the vocabulary of "Snap!"

- 'an identifier i needs to appear in a "set" block before appearing in a "for" block' is a grammar rule of "Snap!" (I would guess)

- 'a "set" block can appear in the body of a "for" block' is also a grammar rule of "Snap!"

You can even build a 1:1 mapping between VL and TL, where VL is the visual language and TL is an equivalent textual language, so the notions are essentially the same.

For me, where the notion of language can be criticized is that for natural languages, the grammar rules and the vocabulary set are ever changing and not always well defined. The rules + vocabulary thing is a model at best.

[1] https://en.wikipedia.org/wiki/Visual_programming_language
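As a toy sketch of the 1:1 VL/TL mapping claimed above: a "visual" program represented as nested block structures, rendered to an equivalent textual form. The block names loosely mimic the Snap! examples; the data format is invented for illustration.

```python
# A "visual" program as nested block structures (the VL), mapped 1:1 to
# text (the TL). The dict format is invented for this sketch.
program = [
    {"block": "set", "var": "i", "value": "0"},
    {"block": "for", "var": "i", "from": 1, "to": 3,
     "body": [{"block": "set", "var": "total", "value": "total + i"}]},
]

def to_text(blocks, indent=0):
    """Map each visual block to one line (or nested lines) of text."""
    pad = "  " * indent
    lines = []
    for b in blocks:
        if b["block"] == "set":
            lines.append(f"{pad}set {b['var']} = {b['value']}")
        elif b["block"] == "for":
            lines.append(f"{pad}for {b['var']} in {b['from']}..{b['to']}:")
            lines.extend(to_text(b["body"], indent + 1))
    return lines

print("\n".join(to_text(program)))
```

The grammar rules are the same in both forms (which blocks may nest where); only the surface presentation differs.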



Interesting! You seem to take "language" really far / in a different direction.

How about something like editing an image in Photoshop? There's a vocabulary (the commands) that gets applied to the image. There are also rules (constraints). One could argue that using Photoshop is communicating to a computer what to do with an image.

So while I say language is a misnomer for PL, you say it's a misnomer for natural languages. ;-)



I guess you could argue for it. Command resize can only be applied after something is selected, and not deselected in the meantime. You might be able to describe the Picture Manipulation Commands language using a finite automaton, and if not, using a grammar.

My views on this are highly shaped by my university lessons on formal languages [1], which I also taught a teeny bit. You might have text in mind, but the mathematical concepts are actually more generic than that. One of the exercises was modeling a PIN electronic door lock with a finite state machine (how to unlock without unlocking too early?).

These formalisms look like computer science stuff but are actually directly taken from Chomsky's work [2], a (bio)linguist who was/is actually concerned with human languages (which another commenter implicitly mentioned [3])

I guess this stuff is probably valuable to study natural languages, given the correct scope, since it was made for it (xD). It sure is useful in computer science in any case.

One direct result of his work is the (in)famous stack overflow "You can't parse HTML with a regex" post [4] (since regexes can only define regular languages, and HTML is not a regular language - although I believe the answer is a bit wrong, you can find a regex to parse the regular subset asked in the question, though you need to watch for cdata sections (in XHTML) and script and style tags (in HTML), at which point maybe just use a proper HTML parser unless you are dealing with well known content with no such exceptions)

[1] https://en.wikipedia.org/wiki/Formal_language

[2] https://en.wikipedia.org/wiki/Linguistics_of_Noam_Chomsky

[3] https://news.ycombinator.com/item?id=40480913#40483454

[4] https://stackoverflow.com/questions/1732348/regex-match-open...
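The door-lock exercise mentioned above can be sketched as a finite state machine, where the state is the number of PIN digits matched in a row. (A minimal illustration with an invented PIN; for PINs whose prefixes repeat internally, e.g. "1123", a full KMP-style failure function would be needed instead of the simple fallback here.)

```python
# A finite-state-machine sketch of the PIN door lock. The state counts
# consecutive matched PIN digits; a wrong key press falls back rather
# than silently dropping progress.
PIN = "1234"

def advance(state: int, digit: str) -> int:
    """Transition function: new state after one key press."""
    if digit == PIN[state]:
        return state + 1                  # one more digit matched
    return 1 if digit == PIN[0] else 0    # restart, reusing the digit
                                          # if it begins the PIN

def unlocked(keys: str) -> bool:
    """Feed a sequence of key presses through the machine."""
    state = 0
    for k in keys:
        if state == len(PIN):
            break                         # already unlocked
        state = advance(state, k)
    return state == len(PIN)

print(unlocked("91234"))    # True: a wrong digit, then the full PIN
print(unlocked("1231234"))  # True: partial match, restart, full PIN
print(unlocked("123"))      # False: incomplete
```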



> people should just be vary not to use language ideas for PL.

(s/vary/wary/, I suspect.)

Why not? I thought of crafting a system as telling a story about what the system was going to do, whether or not it involved an interaction with users. The syntax and structure is extremely precise compared to literature, but if it doesn't tell the story clearly, it's less likely to be a successful effort, both in use and for maintainers.



To me, the biggest dividing line between mediocre and good programmers is who they write code for.

The mediocre ones write their code for the computer/compiler/interpreter. When it works, they're done.

The good ones write their code for human readers to understand, including their future self.



I find that removing the organizing and communication layers greatly increases your velocity. Each idea we need to communicate to others or ourselves (in a linear language) involves a translation process. Removing the need for that layer leaves you in the coding zone. Once you have to leave it to translate, it becomes an expensive process for your brain to recreate the verbal / non-verbal environments (or it takes too much energy to hold both, plus the translation path).

I have a theory that burnout is more common when you keep shifting mental states. Developers who have a phone that keeps ringing throughout the day (or slack messages or urgent emails) are burned out at the end of the day more often than those allowed to be in one state.



I'm working in a library that was written by someone who no longer works at the company who thought this way.

It's an absolute nightmare to understand. He may have been in the "coding zone," but he chose variable names that make no sense, abstractions that don't map on to any actual entities, and left no coherent documentation.

I think the organization and communication layers are absolutely critical, and any apparent velocity tradeoff is just accumulating technical debt, because there's probably an entire team who is perpetually on the hook for supporting what you wrote, and actual long-term velocity over time depends on their ability to understand it.



I think communication layers are critical as well. On a team they are more important than the coding and that's why you hire someone to lead and organize everything.

Removing the large team removes those layers, and that's why a two-person team can do the work of 12.

Adding in documentation, training materials and providing support can happen at the end of the development cycle freeing you to support the project 100%.

It's not that you remove communication or documentation, but that you do it at select intervals. Don't do a full scan of a table every time you need a value found in the index.



Yes, obviously there is some conceptual similarity to written language, but the goals are really quite different. Human language is used to communicate, which involves theory of mind and modelling the listener. Programming language is just specifying a computation to be performed.

Still, it's interesting to consider why the "language areas" of the brain apparently are not used for reading or writing computer language (or for other different types of mental activity for that matter).

Some processing areas of the brain are obviously dictated by connectivity and adjacency (visual pathway connects to visual cortex, auditory pathways connect to auditory cortex), but reading of both prose and computer code comes in visually, so why does one get processed by Wernicke's area, and one not?! The best I can guess is that multiple areas of association cortex might be competing to recognize/predict visual inputs, and those areas best equipped to do so will win and perhaps suppress the others.

If this suggestion is true - that via competition the best-suited area of the brain gets used - then it raises the question: what are the key differences, pertinent to the brain, between written prose and computer code that cause this differentiated processing?



I tried asking both Claude Sonnet and GPT-4 (Bing Copilot) about this, and they referred to the brain using "selective activation" to enable language areas only for language inputs, but both talked in circles and couldn't explain how this "selective activation" actually works: how does the brain, starting from nothing (no knowledge of written language), begin to selectively activate Wernicke's area only for language? It's more understandable how processing gets consolidated there, and suppressed elsewhere, once it gets started!



I agree - I think it is pretty clear that there is a difference between programming languages and other languages. Figuring out the differences (instead of getting wrong ideas because they share some similarities) should lead to better words for different kinds of languages.

Maybe "Spoken languages" is a better term for languages like English? I find speaking computer code out loud just doesn't work except for some particular lines of code. "At dataclass class QueueItem magnitude colon int equals one..." is really not clear, if I have to communicate such code in words I wouldn't read it directly and instead describe it, e. g. "QueueItem is a dataclass with the attributes...").



I suppose the term language in the context of computing is widely accepted because it shares many properties with human language. Hence the common taxonomy in which both natural (human) languages and formal (programming) languages are different types of languages.



Programming languages have a syntax, a semantics, and a more or less formal grammar. They have an alphabet, and morphemes in the form of tokens.

They're a notation, yes, but one with a precisely linguistic form. Language is an appropriate word to use here.
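As a tiny illustration of tokens as the "morphemes" described above, here is a mini-tokenizer for a toy expression language (invented for this sketch, not tied to any real language's lexer):

```python
# Split an arithmetic expression into its smallest meaningful units
# (tokens): numbers, and single non-space symbols for everything else.
import re

TOKEN = re.compile(r"\s*(?:(\d+)|(\S))")  # a number, or any other symbol

def tokenize(src: str):
    """Return (kind, text) pairs for each token in the source string."""
    return [("NUM", num) if num else ("OP", op)
            for num, op in TOKEN.findall(src)]

print(tokenize("1 + 23"))  # [('NUM', '1'), ('OP', '+'), ('NUM', '23')]
```

A grammar then combines these tokens into larger units, exactly as syntax combines morphemes into phrases.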



You are correct, it's more properly thought of as a specification. However it's a bit more than a misnomer, it's a kind of religion or cult with roots going back decades.



There will be a comeback of literate programming. Tools were not advanced enough for it to be really practical. CS was not mature enough as a field for LP to flourish.



I was not referring to TeX and PDF files. Donald Knuth is the creator of literate programming and TeX, but I personally consider this toolchain to belong to the stone age right now.

I was referring to:

1) Org mode and Emacs. LP was introduced as a concept in 1984, and org-mode was created in 2007 or close to that; it took 20+ years until we finally got a good formatting option for text + code. One can export to PDF or HTML, but I see no reason not to read the code directly in org-mode. It is better, actually.

2) A good programming language with types. That's Rust, and it was created in 2015.

3) LLMs to automatically generate descriptions of code. Type annotations are one of the most important hints for LLMs to generate accurate descriptions. Untyped or weakly typed code, like Emacs Lisp, Python, etc., does not help LLMs be as good as possible.

4) A good LLM which is accurate enough, plus cheap and fast. That's Llama3-8b on Groq, which provides very fast access to Llama3.

I am currently researching how I can take a 100,000-line Rust project and transform it into a literate programming org document. I have created a small program to cut Rust source code into chunks in an intelligent way [1]. Now I am trying to figure out how to provide structure to the LLM, by extracting type signatures with rustdoc and feeding them as context to the LLM with each query.

It will take some months before I have something good, but based on my calculations, a 100,000-line program will be transformed into a giant literate programming org document in an hour, costing close to 50 cents.

Point being that LP will make a comeback, it's gonna be quick and it's gonna be cheap.

[1] https://github.com/pramatias/documentdf



> However it's a bit more than a misnomer, it's a kind of religion or cult with roots going back decades.

What do you mean? Formalisms to study languages (natural or programming) are tools.



From my own experience of rebuilding my mind map after getting control of my ADHD through nootropics, it seems that writing and explaining it to someone else forces my brain to fine tune the meta-language mind map of coding it has.



> rebuilding my mind map

and

> to fine tune the meta-language mind map of coding it has

I have difficulty understanding / unpacking these parts of your sentence; what do you mean?



This definitely feels intuitively correct, at least for how my brain works.

When I'm reading language (prose), so much is happening -- what is the writer's overarching point and how does this fit in, parsing tons of ambiguity (a word can have lots of meanings), picking up on the writer's "style" (short and to the point? verbose and flowery?), sometimes enjoying the rhythm and even "musicality" of a sentence, visually imagining a "picture" if it's literature, and so forth.

When I'm reading code, it feels much more like following Lego set instructions, or a math proof. And I'm going slowly. It absolutely feels like I'm using a completely different part of my brain from when I'm reading.

Yes, there's a little bit of overlap -- variables can use long descriptive names, and I might be reading a non-fiction article about economics that uses some of the math-y concept-building I use in reading code. But that's about it.

Technically both have a "grammar", but written-language grammar has a flexibility and artistry and ambiguity about it, while computer grammar is rigid, logical, and unyielding. Interpreting them feels like it uses very different parts of the brain indeed.



We have come to see with LLMs that language can be generated using math.

Would love to see if this is the same when reading different patterns of code - functional vs object oriented. Where do product specs or technical specs lie?

All of this feels more like pseudo-science to me -- the distinction between language and math is not well understood at the moment.



Curious.

I know two dyslexics who coded for a living. Both reported trouble reading code, same as with literature.

I got much better about putting white space to spread things out from working with them.



This feels intuitive to me. At least for programming languages I know well "reading" code is more like looking at it from a structural point of view. Like reviewing a diagram or flowchart.

I don't usually read code from front to back like you do with natural language.



Might not be the same, but reading computer code and Ancient Greek or Latin are remarkably similar skills.

I could be convinced that parsing a language at that resolution isn’t how language processing works for general use, but that’s like saying diagraming a sentence isn’t speaking a language. Technically true but missing the point.



Agreed. Something I also noticed is that people who make PLs, like Guido van Rossum (Python), Rich Hickey (Clojure), Guy Steele (Common Lisp), Stroustrup (C++), and many others, use natural language expertly.

They express their thoughts simply yet accurately; they do not get confused, pad everything with a million "you know"s and "like"s, hand-wave, or get lost in the weeds.

Stating that PLs are not absolutely the same as natural languages is true, but no association at all? Or very little association? Difficult to believe.



There is a study I read a while ago which basically said fMRI pattern-comparison studies may be suspect because of what fMRI actually measures. "Lighting up" may not mean much, at least not in the sense assumed by papers that make these comparisons. It is a shame that I can't locate it. There is probably a book on this topic as well.



> It’s not the same as language, and it’s not the same as math and logic

Not the same as logic? What?? Code is literally logic itself. Code that works with all supported parameters is logically correct code by definition. Correctness is often impossible to prove exhaustively, but it can be inferred with very high confidence in many cases.

Producing correct code is the goal of all good developers. Just like producing correct equations is the goal of all good mathematicians.



You missed a paragraph that would help out your understanding of what they're discussing and studying and remove the need to spam the '?' key in mock outrage:

> Previous studies have shown that math and logic problems seem to rely mainly on the multiple demand regions in the left hemisphere, while tasks that involve spatial navigation activate the right hemisphere more than the left. Working with Marina Bers, a professor of child study and human development at Tufts University, the MIT team found that reading computer code appears to activate both the left and right sides of the multiple demand network, and ScratchJr activated the right side slightly more than the left.

She didn't say anything about what code is in relation to math and logic, she was speaking about how the activated regions in the brain are different between the two tasks. And per the study they are different, or, as she said, "not the same".



This has been obvious to anyone who programs and speaks more than one language fluently. The government of an Anglo-Saxon country tried to replace a language subject with programming, arguing that since programming is a language, kids should be able to choose between a foreign language and programming.



Duh, not even reading math is the same as language.

That, I feel, is the greatest barrier for math teaching in our education system.

The need to compress everything into a line or two, supposedly for ease of transcription and speed.

Hopefully VR and AR can bring about a novel way to teach math which is much more visual and expansive, and less about signs and the need to compress.



While names reveal intention, I do find it easier to reason about code when everything has one-letter variables. If I'm trying to think about an algorithm or just some complex logic, the shorter the names, the easier I can manipulate things and reason about them, I have found.



What I am trying to convey is that presently your way is the only way, whereas we deserve something to enable everybody to contribute.

Much like some people get their best intuitions in the shower whereas others do it while walking, and others still have them while listening to music...



At the risk of just being flippant, duh. Language is a means for expressing thoughts, feelings, emotions, and data. Code is a means for instructing a machine how to behave. One encapsulates as much of the experience of being a human as we've been able to cram into it, and the other is a shopping list. They are not the same.



There's quite some things that seem to only/best be expressed as code or mathematical formulas, rather than as natural language. (things like cellular automata come to mind)
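For instance, an elementary cellular automaton is a few lines of code but notoriously clumsy to spell out in prose. A minimal sketch (Rule 110; the helper name `step` is just illustrative):

```python
def step(cells: list[int], rule: int = 110) -> list[int]:
    """Advance an elementary cellular automaton one generation.

    Each cell's next state is looked up from the 8-bit rule number
    using its left/self/right neighborhood (wrapping at the edges).
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 7 + [1] + [0] * 7  # single live cell in the middle
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Describing those same update rules in natural language takes a paragraph per rule table, which is exactly the point.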



It suggests that the language center may not be sufficient for all forms of understanding. I guess there's a reason why we have the rest of our brains as well!

(Possibly the above might seem too glib at first. So, reasoning further...: )

But this is confusing, since natural language should, by necessity, also be Turing complete and thus able to express anything expressible in the known universe. https://en.wikipedia.org/wiki/Turing_completeness

Would that mean that some form of Sapir-Whorf would apply, where for some languages a completely different part of the brain is even activated? https://en.wikipedia.org/wiki/Linguistic_relativity

I think -intuitively- the best way to see it is some amalgamation of weak-form Sapir-Whorf and Turing equivalence should apply: Not all data structures are equally good at expressing the same things: Some are better than others, depending on the domain.
