Both programs and math proofs are allowed to be written in story mode, where definitions can come after their usages. Unless you write in C. There's no story mode in C :-)
> “Story” mode is why I like to organize my code top-down, often with definitions appearing after their usages.

Yes, it drives me nuts that e.g. Python makes you put the definitions before their use!
That's part of why Knuth came up with Literate Programming: he wanted to be able to present the reader of his code with a coherent order that makes sense for understanding the project.

Given the strict formal ordering requirements of languages like Pascal, Knuth had to come up with a clever preprocessing scheme to satisfy both the reader and the compiler. (See his tool, https://en.wikipedia.org/wiki/Web_(programming_system) ) Modern languages are a bit more flexible, but can still benefit from some re-ordering.

> The worst is when you grep the code for "main" in an attempt to at least come to your own idea of order, and it turns out that a bunch of files have their own main functions for various unit tests in files the build system just happens to ignore :/.

I used to use tools like grep a lot, but I found that my productivity really improved when I set up an environment that supports 'jump to definition', 'jump to references', etc.
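Knuth's scheme can be sketched in miniature: chunks are written in whatever order suits the reader, and a "tangle" step expands named references into the order the compiler needs. A minimal sketch in Python (the chunk syntax and names here are invented for illustration; this is not WEB's actual notation):

```python
import re

# Chunks written in "story" order: the top-level story first,
# supporting definitions afterwards. <<name>> references another chunk.
chunks = {
    "program": "<<helpers>>\n<<main>>",
    "main": "def main():\n    print(greet('world'))",
    "helpers": "def greet(who):\n    return f'hello, {who}'",
}

def tangle(name):
    """Recursively expand <<chunk>> references into compiler order."""
    def expand(match):
        return tangle(match.group(1))
    return re.sub(r"<<(\w+)>>", expand, chunks[name])

print(tangle("program"))
```

The reader sees the chunks in story order; the compiler sees the tangled output, with `greet` defined before `main` uses it.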
If you read code and don't think about what it's doing, are you really "reading" it?

Reading code without running it in your head feels like reading prose without actually comprehending it.
Maybe? I don’t need to read and calculate every iteration of some loop to understand that it’s a mapping. But if I need to determine the actual output value of a function, I might.
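For instance, the shape of a loop can signal "this is a map" before you trace a single value (a hypothetical illustration):

```python
# Recognizable at a glance as a map: one output element per input element,
# no early exits, no accumulation across iterations.
def to_celsius(temps_f):
    result = []
    for t in temps_f:
        result.append((t - 32) * 5 / 9)
    return result

# Equivalent, with the mapping made explicit:
def to_celsius_mapped(temps_f):
    return [(t - 32) * 5 / 9 for t in temps_f]
```

You can see both are maps without computing any conversions; you only do the arithmetic when you need an actual output value.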
>> Making programming languages superficially more like human languages is one of the classic rookie mistakes

One of the least productive programmers I've worked with was obsessed with Groovy.
I'm approaching 30 years of experience and I do read code like human language. The visualizations I'm using for data structures aren't really much different from visualizing a scene from a book.
> I think the similarity between human languages and computer code is frequently overstated

It wouldn't even occur to me to think of them as belonging to the same category.
Lex Fridman recently interviewed Edward Gibson, a linguist, and they talked a lot about how the brain processes language, including how reading code lights up a completely different part of the brain than language does.

#426 https://lexfridman.com/podcast/

ETA: I believe Evelina Fedorenko, from this paper, is Edward's wife.
I run a YouTube channel that's mostly devoted to reading code: https://www.youtube.com/@ants_are_everywhere

I (try to) read code from popular open source projects. It's a bit like sight reading in music because the idea is that I go in blind and try to make sense of it. This rings true to me:

> The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks.

There is language of course, but I'm not working hard to process the language, so it wouldn't show up as an fMRI signal. I am often holding several things in mind at once, including things like what the code does, what the authors are trying to do, what the author's background might be, how similar software works, etc.

I did study math, and it also doesn't really feel like reading or doing math.
And this is why getting interrupted is so annoying.

It takes a while to “load up” all of those things in our minds, and it’s a bit like a balancing act: you need to keep balancing them to prevent the whole thing from crashing down. Then if you get interrupted, it is like losing your balance and things falling into a mess. Afterwards you need to pick everything up again, which takes time and effort. More than a couple of those in a row are incredibly frustrating.

pg talks about this in his essay on the maker schedule vs. the manager schedule (2009): https://paulgraham.com/makersschedule.html
It looks different and has a very different syntax and meaning. Practically the only thing it has in common is that it's also called "a language."
People have always told me the best programmers/computer scientists are also great mathematicians, and this seems to support that. Especially thinking about Turing, von Neumann, and Knuth.
Not reading but writing code gives me a feeling similar to playing the piano, especially in highly interactive programming languages like Clojure or Common Lisp. Does anyone have similar experiences?
When computers read programming languages, that also doesn't activate anything resembling language processing, so why would it require human language processing capabilities?
I’m not thinking of literate programming, no. I’m familiar with that. I’m referring to a fairly well-known recommendation from Clean Code: https://www.baeldung.com/cs/clean-code-formatting#vertical-f...

I don’t agree that “anyone who has done any programming” knows it’s not the same. The number of comments to the contrary in this thread is evidence enough. The parallels between natural and programmatic languages are not just undeniable but often useful, and how the human brain models its understanding of code is orthogonal to that utility.
Programming is more spatial than linguistic. It requires temporal understanding in order to formulate, and usually data 'structures' are based around physically laying data out.
Stands to reason. The brain has to constantly resolve ambiguity when reading natural languages, while artificial languages are (usually) designed to be free of ambiguity.
You can see the vocabulary and the rules of a language as a specification (for the set of productions of the language) if you wish. A language is a specification (although the reverse might not be true).

Visual programming certainly deals with a (visual) programming language, which is still a language. A language doesn't need to be textual; it only needs grammar rules and vocabulary. For instance, taking the example from [1]:

- the "set" block and the "for" block are elements ("words") of the vocabulary of "Snap!"
- 'an identifier i needs to appear in a "set" block before appearing in a "for" block' is a grammar rule of "Snap!" (I would guess)
- 'a "set" block can appear in the body of a "for" block' is also a grammar rule of "Snap!"

You can even build a 1:1 mapping between VL and TL, where VL is the visual language and TL is an equivalent textual language, so the notions are essentially the same.

For me, where the notion of language can be criticized is that for natural languages, the grammar rules and the vocabulary set are ever changing and not always well defined. The rules + vocabulary thing is a model at best.

[1] https://en.wikipedia.org/wiki/Visual_programming_language
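The 1:1 mapping between VL and TL can be sketched by treating blocks as data and rendering them to equivalent text (the block encoding below is invented for illustration; it is not Snap!'s actual internal representation):

```python
# "set" and "for" blocks as nested data:
# ("set", var, value) and ("for", var, lo, hi, body).
program = [
    ("set", "total", "0"),
    ("for", "i", 1, 3, [
        ("set", "total", "total + i"),
    ]),
]

def render(blocks, indent=0):
    """Render the block structure into an equivalent textual program."""
    lines = []
    pad = "    " * indent
    for block in blocks:
        if block[0] == "set":
            _, var, value = block
            lines.append(f"{pad}{var} = {value}")
        elif block[0] == "for":
            _, var, lo, hi, body = block
            lines.append(f"{pad}for {var} in range({lo}, {hi} + 1):")
            lines.extend(render(body, indent + 1))
    return lines

print("\n".join(render(program)))
```

The visual arrangement (nesting, ordering) carries exactly the information the textual syntax carries, which is why the grammar-and-vocabulary view applies to both.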
I guess you could argue for it. The resize command can only be applied after something is selected, and not deselected in the meantime. You might be able to describe the Picture Manipulation Commands language using a finite automaton, and if not, using a grammar.

My views on this are highly shaped by my university lessons on formal languages [1], which I also taught a teeny bit. You would have text in mind, but the mathematical concepts are actually more generic than that. One of the exercises was modeling a PIN electronic door lock with a finite state machine (how to unlock without unlocking too early?).

These formalisms look like computer science stuff but are actually directly taken from Chomsky's work [2]; he is a (bio)linguist who was/is actually concerned with human languages (which another commenter implicitly mentioned [3]). I guess this stuff is probably valuable for studying natural languages, given the correct scope, since it was made for that (xD). It sure is useful in computer science in any case.

One direct result of his work is the (in)famous Stack Overflow "You can't parse HTML with a regex" post [4], since regexes can only define regular languages, and HTML is not a regular language. (Although I believe the answer is a bit wrong: you can find a regex to parse the regular subset asked about in the question, though you need to watch for CDATA sections (in XHTML) and script and style tags (in HTML), at which point maybe just use a proper HTML parser unless you are dealing with well-known content with no such exceptions.)

[1] https://en.wikipedia.org/wiki/Formal_language
[2] https://en.wikipedia.org/wiki/Linguistics_of_Noam_Chomsky
[3] https://news.ycombinator.com/item?id=40480913#40483454
[4] https://stackoverflow.com/questions/1732348/regex-match-open...
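The PIN door lock exercise can be sketched as a finite state machine whose state is the length of the currently matched prefix of the code; the fallback on a mismatch is what prevents unlocking too early. (The PIN and the transition logic here are an invented illustration of the exercise, not the original solution.)

```python
# FSM for a door lock with PIN 1-2-3-4. State = length of the
# currently matched prefix; reaching state 4 means "unlocked".
PIN = "1234"

def step(state, digit):
    """Advance the matched-prefix state by one input digit."""
    if digit == PIN[state]:
        return state + 1  # extend the current match
    # Mismatch: fall back to the longest PIN prefix that is still a
    # suffix of the input seen so far (so e.g. 1-1-2-3-4 still unlocks,
    # but never earlier than the full sequence).
    seen = PIN[:state] + digit
    for k in range(min(len(seen), len(PIN)), 0, -1):
        if seen.endswith(PIN[:k]):
            return k
    return 0

def unlocked(digits):
    state = 0
    for d in digits:
        state = step(state, d)
        if state == len(PIN):
            return True
    return False
```

For example, `unlocked("11234")` is true because the second "1" restarts the match, while `unlocked("1243")` is false. The same state-transition view is what lets you analyze the Picture Manipulation Commands language (select before resize) as an automaton.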
You are correct, it's more properly thought of as a specification. However it's a bit more than a misnomer, it's a kind of religion or cult with roots going back decades.
There will be a comeback of literate programming. Tools were not advanced enough for it to be really practical. CS was not mature enough as a field for LP to flourish.
> However it's a bit more than a misnomer, it's a kind of religion or cult with roots going back decades.

What do you mean? Formalisms to study languages (natural or programming) are tools.
> rebuilding my mind map

and

> to fine tune the meta-language mind map of coding it has

I have difficulty understanding/unpacking these parts of your sentence. What do you mean?
Curious.

I know two dyslexics who coded for a living. Both reported the same troubles reading code as reading literature. I got much better about putting in white space to spread things out from working with them.
There are quite a few things that seem to be only or best expressed as code or mathematical formulas, rather than as natural language (things like cellular automata come to mind).
It suggests that the language center may not be sufficient for all forms of understanding. I guess there's a reason why we have the rest of our brains as well!

(Possibly the above might seem too glib at first, so, reasoning further:)

But this is confusing, since natural language should, by necessity, also be Turing complete and thus be able to express anything expressible in the known universe: https://en.wikipedia.org/wiki/Turing_completeness

Would that mean that some form of Sapir-Whorf applies, where for some languages a completely different part of the brain is even activated? https://en.wikipedia.org/wiki/Linguistic_relativity

I think, intuitively, the best way to see it is that some amalgamation of weak-form Sapir-Whorf and Turing equivalence should apply. Not all data structures are equally good at expressing the same things: some are better than others, depending on the domain.
I don’t think this is about understanding what is written; it’s about figuring out what that which is written means.