(comments)

Original link: https://news.ycombinator.com/item?id=40937119

Hello! Would you like to discuss a recent article titled "Why Programmers Need Visuals"? Sure, let's dive in. Judging from the title, the article explores the benefits and importance of using visual representations in programming, aiming primarily to improve programmers' productivity and their understanding of software systems. It argues that visual programming can offer significant advantages for understanding software systems and for collaboration. Let's look at some of the key points it covers.

First, the author claims that traditional textual representations (such as source code or pseudocode) lack the ability to provide a comprehensive overview of complex computational processes because of their linear nature. By contrast, visual programming lets programmers grasp the interactions between components holistically. Second, the article highlights several areas where visualization can enhance collaboration, such as sharing designs and prototypes, discussing changes, and reviewing decisions. Collaboration is essential for developing scalable software systems, especially given the complex dependencies between components. Third, the author points to the importance of code aesthetics and conventions, arguing that visual tools help improve code quality by promoting clear organization, modularity, and reduced duplication of effort. Clean, well-organized code leads to fewer bugs and eases the maintenance and evolution of software systems. Finally, the author offers some practical examples of visualization techniques applied in software engineering, including state machine diagrams, swimlane diagrams, and visual modeling tools for understanding the structure and behavior of software components.

Overall, the author argues that adopting visual programming practices can make programming more efficient and effective, especially when dealing with complex software systems or multi-stakeholder collaboration. They challenge readers to consider adopting visual tools to develop, maintain, and present software designs, acknowledging that visual programming requires additional investment but may yield substantial returns. Now, a question for discussion: have you used visual programming tools in your projects? If so, what differences did you notice compared with purely textual representations? If not, are you interested in trying visual programming techniques? Share your thoughts and experiences.



I think we need to differentiate: Visualize a program vs. Visually program.

This post seems to still focus on the former, while an earlier HN post on Scoped Propagators https://news.ycombinator.com/item?id=40916193 showed what's possible with the latter, specifically when programming with graphs.

Bret Victor might argue visualizing a program is still "drawing dead fish".

The power of visual programming is diminished if the programmer aims to produce source code as the final medium and only uses visualization on top of the language. It would be much more interesting to investigate "visual first" programming, where the programmer aims to author, and more importantly think, primarily in the visual medium.



I think there's a very important real-world nuance here.

What you want with a programming language is to handle granular logic in a very explicit way (business requirements, precise calculations, etc.). What this article posits, and what I agree with, is that existing languages offer a more concise way of doing that.

If I wanted to program in a visual way, I'd probably still want / need the ability to do specific operations using a written artifact (language, SQL, etc). Combining them in different ways visually as a first-class operation would only interest me if it operated at the level of abstraction that visualizations currently operate at, many great examples of which are offered in the article (multiple code files, system architecture, network call).



My son who started programming at 7 pretty quickly moved on from languages like Scratch and Tinker. To the extent to which he uses them at all, it’s mostly to play games that are available in them. I’m not entirely convinced that he couldn’t have just started with Javascript or Python. It’s not like learning the syntax for a for loop¹ is that much harder than arranging the blocks in one of those block languages.

1. Although I must confess that I have a mental block about the second and third components of a C-style for-loop and whenever possible, I avoid them if I can.



> Although I must confess that I have a mental block about the second and third components of a C-style for-loop and whenever possible, I avoid them if I can.

Glad I'm not the only one! Despite programming for over a decade, I still mix up the order of `update` and `condition` sometimes in `(initialization, condition, update)` for loops. Probably because I spent too much time with Python and became so accustomed to only using `for x in y` style loops.
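
For reference, a minimal sketch of the ordering (written in Rust rather than C, since Rust has no C-style for loop and the desugaring makes the sequence explicit):

    fn main() {
        // C-style `for (init; condition; update) { body }` is equivalent
        // to this: the condition is checked before each iteration, and
        // the update runs after the body.
        let mut i = 0; // initialization
        while i < 10 { // condition
            println!("{i}"); // body
            i += 1; // update
        }
    }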



It has definitely pushed me to prefer foreach style loops in my coding which, I think, makes the code in general better (and, when writing in rust, faster as the generated code is able to eschew bounds checks).
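
A toy illustration of that point in Rust (hypothetical functions; in a case this simple the optimizer can often elide the checks for the indexed version too, but the iterator form never emits them in the first place):

    fn sum_indexed(v: &[i32]) -> i32 {
        let mut total = 0;
        // Each `v[i]` carries a bounds check unless the optimizer can
        // prove `i` is always in range.
        for i in 0..v.len() {
            total += v[i];
        }
        total
    }

    fn sum_foreach(v: &[i32]) -> i32 {
        // The iterator yields each element directly, so there is
        // nothing left to bounds-check.
        v.iter().sum()
    }

    fn main() {
        let v = vec![1, 2, 3];
        assert_eq!(sum_indexed(&v), sum_foreach(&v));
    }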



Is Visual Basic still a thing? That was my start and it always felt like a good intro language. It was limiting but you could still make "real" desktop apps.



The dead fish metaphor is so interesting because programs aren’t static objects, they move.

Most visual programming environments represent programs in a static way, they just do it with pictures (often graphs) instead of text.

Perhaps there is something to be discovered when we start visualizing what the CPU does at a very low level, as in moving and manipulating bits, and then build visual, animated abstractions with that.

A lot of basic bit manipulations might be much clearer that way, like shifting, masking, etc. I wonder what could be built on top to get a more bird's-eye view.



Yes, most diagrams are frustratingly static, even those that overlay step-by-step information on top.

I've found "diagrams" in my head, the mental models I use to reason about a problem, are not static. They are abstract machines, with cogs and gears and clutches and input shafts and output shafts, colors and arrows and action and movement, that work like a map for finding the solution, either directly, or at least leading me in the right direction to find somewhere to easily test my map/model/diagram/machine and probably improve it if I find somewhere it's less than ideal.

The issue is, many of those are not models or diagrams I ever got out of a book or a website. They're all painstakingly, agonizingly built up over years of failure and years of struggle and troubleshooting and riding blind into the thorny thicket of an issue, blindly feeling around for a solution and if I'm lucky, integrating that into a part of my mental model.

Even something like reading XML or JSON involves a good deal of visualized movement or color to be able to parse it quickly, something that no spec or tutorial ever bothers with, if they even could.

All I know is pointers never made sense until I had a class on assembly and was able to step through my program with a view of my entire program memory open and being forced to choose addressing mode for a memory reference before it clicked. Draw all the arrows you want in a textbook, but it wasn't until I saw it moving that I understood the machine.

Same with more complex stuff like Kubernetes. Like ok, draw me a shit load of boxes with labels on them like "LoadBalancer" and "Storage" in them, but if you don't relate that to the 500 line YAML I'm told to blindly apply, I still don't have a model of how any of it works.



The pointer thing is so relatable.

I don’t think we share exactly the same inner visuals, but they all relate to some intuitive understanding.

For me there are these unique and abstract visuals, but they often blend in with code (text). When I'm intimate with a program I can often visualize it line by line and fix bugs or change its behavior.

However, the thing that's most removed from textual representation is data, memory, I/O (streams, etc.), and it all moves.



> Yes, most diagrams are frustratingly static

Most source code is static. If you want to show a diff, you normally do it with a side-by-side view, regardless of whether you show the diff as textual source code or as two diagrams.

The transformations you sometimes see in YouTube videos, of moving and removing small bits of code to e.g. show the differences between a piece of functionality in an object-oriented vs. functional language, are only useful because they require your eyes to follow relatively few points of interest.



> what the CPU does at a very low level,

Careful with what you wish for. Below the ISA abstractions there are endless nightmares of realities created and destroyed, time flowing in multiple directions, and side effects of other realities you can almost see, but won’t.



> Bret Victor might argue visualizing a program is still "drawing dead fish".

> The power of visual programming is diminished if the programmer aims to produce source code as the final medium and only uses visualization on top of the language.

I disagree. We frequently break up large systems into chunks like modules, or micro-services, or subsystems. Often, these chunks' relationships are described using diagrams, like flowcharts or state transition diagrams, etc.

Furthermore, quite often there are zero direct code references between these chunks. Effectively, we are already organizing large systems in exactly the fashion the OP is proposing. Inside each chunk, we just have code. But at a higher-level viewpoint, we often have the abstraction described by a diagram. (Which is often maintained manually, separate from the repo.)

What exactly are the disadvantages here?



> We frequently break up large systems into chunks like modules, or micro-services, or subsystems. Often, these chunks' relationships are described using diagrams, like flowcharts or state transition diagrams, etc.

We frequently break up large systems into chunks like modules, or micro-services, or subsystems. Often, these chunks' relationships are documented using diagrams on a high level (like flowcharts or state transition diagrams etc.), but are not executable.

Fixed it for you.



> but are not executable.

> Fixed it for you.

Dude, if you say the flow in the diagram is not executable, as a blanket statement, then are you saying all of the programming projects you've been in are either monolithic systems, or have all failed?



> (Which is often maintained manually, separate from the repo.)

To me, this is the interesting avenue for investigation.

Rather than go from visualization -> code, how can we take an existing visualization that represents some underlying system (a code base, module dependencies, service architecture, a network topology, etc) and easily update the representation as the underlying system changes...



>I think we need to differentiate: Visualize a program vs. Visually program.

Not necessarily; programming with visual DSLs is already a thing in the field of language-oriented programming. Visual programming refers to a different thing, but it's not impossible to make a connection between the two fields.

Visual programming is now more like an umbrella term for projects (and research) exploring new ways of programming beyond textual representation. It would probably be better to call it non-textual programming, because some of its ideas are not tied to visuality, like structural editing.

Visual programming environments offer a concrete way to program general-purpose code; DSLs offer a very specific language to program in a domain (language-oriented programming offers ways to invent these DSLs). Often visual programming is applied to a specific domain, as an alternative to textual scripting languages. Maybe this confuses people into thinking visual languages are less powerful and not general-purpose.

What's described in the article is a visual DSL based on diagrams, used as the source for the program itself (which is exactly the same as UML). But the whole thing is not well thought out, and I think it only serves the purpose of dunking on visual programming, or on the people who are working on it for "not understanding what professional programmers need".



> I think we need to differentiate

My read of this post (especially the title) is the author does differentiate normally but chose to blur the lines here for a narrative hook & a little bit of fun.



> The power of visual programming is diminished if the programmer aims to produce source code as the final medium

Why would that be true?

> It would be much more interesting to investigate "visual first" programming where the programmer aims to author, and more importantly think, primarily in the visual medium.

What advantages would that give? The disadvantages are so big that it will basically never happen for general purpose programming. Making a brand new language make any sort of inroads in finding a niche takes at least a decade, and that's usually with something updating and iterating on what people are already doing.



I think quite an interesting starting point is a general-purpose visual medium which is good enough to be used for programming, too.

Aka: a more visual/structured medium for some of the use cases where we use text today.



One can start from typical UIs and tinker from there on why they aren't good enough for programming.

A good first step is to notice that we don't even have static data objects. UIs are still full of them (forms), but you cannot copy/paste or store them as a whole; everything is ad hoc. Now imagine that every form could be handled like a Unity scriptable object. And maybe something like what prefab variants do: data inheritance.



As someone with a hardware background, I'll throw in my $0.02. The schematic capture elements to connect up large blocks of HDL with a ton of I/O going everywhere are one of the few applications of visual programming that I like. Once you get past defining the block behaviors in HDL, instantiation can become tedious and error-prone in text, since the tools all kinda suck with very little hinting or argument checking, and the modules can and regularly do have dozens of I/O arguments. Instead, it's often very easy to map the module inputs to schematic-level wires, particularly in situations where large buses can be combined into single fat lines, I/O types can be visually distinguished, etc. IDE keyboard shortcuts also make these signals easy to follow and trace as they pass through hierarchical organization of blocks, all the way down to transistor-level implementations in many cases.

I've also always had an admiration for the Falstad circuit simulation tool[0], as the only SPICE-like simulator that visually depicts magnitude of voltages and currents during simulation (and not just on graphs). I reach for it once in a while when I need to do something a bit bigger than I can trivially fit in my head, but not so complex that I feel compelled to fight a more powerful but significantly shittier to work with IDE to extract an answer.

Schematics work really well for capturing information that's independent of time, like physical connections or common simple functions (summers, comparators, etc). Diagrams with time included sacrifice a dimension to show sequential progress, which is fine for things that have very little changing state attached or where query/response is highly predictable. Sometimes, animation helps restore the lost dimension for systems with time-evolution. But beyond trivial things that fit on an A4 sheet, I'd rather represent time-evolution of system state with timing diagrams. I don't think there's many analogous situations in typical programming applications that call for timing diagrams, but they are absolutely foundational for digital logic applications and low-level hardware drivers.

[0]: https://www.falstad.com/circuit/



> The schematic capture elements to connect up large blocks of HDL with a ton of I/O going everywhere are one of the few applications of visual programming that I like.

Right. Trying to map lines of code to blocks 1-to-1 is a bad use of time. Humans seem to deal with text really well. The problem comes when we have many systems talking to one another; skimming through text then becomes far less effective. Being able to connect 'modules' or 'nodes' together visually (whatever those modules are) and rewire them seems to be a better idea.

For a different take that's not circuit-based, see how shader nodes are implemented in Blender. That's not (as far as I know) a Turing-complete language, but it gives one an idea of how you can connect 'nodes' together to perform complex calculations: https://renderguide.com/blender-shader-nodes-tutorial/

A more 'general purpose' example is the blueprint system from Unreal Engine. Again we have 'nodes' that you connect together, but you don't create those visually, you connect them to achieve the behavior you want: https://dev.epicgames.com/documentation/en-us/unreal-engine/...

> I don't think there's many analogous situations in typical programming applications that call for timing diagrams

Not 'timing' per se (although those exist), but situations where you want to see changes over time across several systems are incredibly common, but existing tooling is pretty poor for that.



As much as I prefer to do everything in a text editor and use open-source EDA tools/linters/language servers, Xilinx's Vivado deserves major credit from me for its block editor, schematic view, and implementation view.

For complex tasks like connecting AXI, SoC, memory, and custom IP components together, things like bussed wires and ports, as well as GUI configurators, make the process of getting something up and running on a real FPGA board much easier and quicker than if I had to do it all manually (of course, after I can dump the Tcl trace and move all that automation into reproducible source scripts).

I believe the biggest advantage of the Vivado block editor is the "Run Block Automation" flow that can quickly handle a lot of the wire connections and instantiation of required IPs when integrating an SoC block with modules. I think it would be interesting to explore if this idea could be successfully translated to other styles of visual programming. For example, I could place and connect a few core components and let the tooling handle the rest for me.

Also, a free idea (or I don't know if it's out there yet): an open-source HDL/FPGA editor or editor extension with something like the Vivado block editor that works with all the open source EDA tools with all the same bells and whistles, including an IP library, programmable IP GUI configurators, bussed ports and connections, and block automation. You could even integrate different HDL front-ends as there are many more now than in the past. I know Icestudio is a thing, but that seems designed for educational use, which is also cool to see! I think a VSCode webview-based extension could be one easier way to prototype this.



> Also, a free idea (or I don't know if it's out there yet): an open-source HDL/FPGA editor or editor extension with something like the Vivado block editor that works with all the open source EDA tools with all the same bells

"Free idea: do all this work that it takes hundreds of people to do. Free! It's even free! And easy!"

Lol you must be one of those "the idea is worth more than the implementation" types.



There's no reason they can't instead be used to show how data transforms. The sort of 'flow wall' someone sees in a large industrial setting (think water/waste water treatment plants, power plants, chemical plants, etc) or process mockup diagrams for spreadsheet heavy modpacks (I'm looking at you GregTech New Horizons).

Data can instead be modeled as inputs which transform as they flow through a system, and possibly modify the system.



Building block diagrams in Vivado to whip up quick FPGA designs was a pleasant experience. Unfortunately the biggest problem wasn't the visual editor. The provided implementations of the AMD/Xilinx IP cores are terrible and not on par with what you would expect first party support to be. The other problem was that their AXI subordinate example was trash and acted more like a barrier to get started. What they should have done is acquire or copy airhdl and let people auto generate a simpler register interface that they can then drag and drop.



i remember using the falstad sim constantly at university a decade ago. super helpful and so much more intuitive than any spice thing. cool to see that it's still around and used



Great article. Any sufficiently complex problem requires looking at it from different angles in order to root out the unexpected and ambiguous. Visualizations do exactly that.

This is especially important in the age of AI coding tools and how coding is moving from lower level to higher level expression (with greater levels of ambiguity). One ideal use of AI coding tools would be to be on the lookout for ambiguities and outliers and draw the developer's attention to them with relevant visualizations.

> do you know exactly how your data is laid out in memory? Bad memory layouts are one of the biggest contributors to poor performance.

In this example from the article, if the developer indicates they need to improve performance, or the AI evaluates the code and thinks it's suboptimal, it could bring up a memory layout diagram to help the developer work through the problem.
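
As a hypothetical sketch of what such a diagram would surface, here are two Rust structs (invented for illustration) with the same fields whose sizes differ purely because of field order and padding (`repr(C)` pins the declared order; by default rustc may reorder fields on its own):

    use std::mem::size_of;

    #[repr(C)]
    struct Padded {
        a: u8,  // 1 byte, then 7 bytes of padding so `b` can align to 8
        b: u64, // 8 bytes
        c: u8,  // 1 byte, then 7 bytes of tail padding
    }

    #[repr(C)]
    struct Reordered {
        b: u64, // 8 bytes
        a: u8,  // 1 byte
        c: u8,  // 1 byte, then 6 bytes of tail padding
    }

    fn main() {
        println!("Padded:    {} bytes", size_of::<Padded>());    // 24
        println!("Reordered: {} bytes", size_of::<Reordered>()); // 16
    }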

> Another very cool example is in the documentation for Signal's Double Ratchet algorithm. These diagrams track what Alice and Bob need at each step of the protocol to encrypt and decrypt the next message. The protocol is complicated enough for me to think that the diagrams are the source of truth of the protocol

This is the next step in visualizations: moving logic from raw code to expressions within the various visualizations. But we can only get there bottom-up, solving one particular problem, one method of visualization at a time. Past visual code efforts have all been top-down universal programming systems, which cannot look at things in all the different ways necessary to handle complexity.



> Any sufficiently complex problem requires looking at it from different angles in order to root out the unexpected and ambiguous. Visualizations do exactly that.

To me, this is an underappreciated tenet of good visualization design.

Bad/lazy visualizations show you what you already know, in prettier form.

Good visualizations give you a better understanding of things-you-don't-know at the time of designing the visualization.

I.e. If I create a visualization using these rules, will I learn some new facts about the "other stuff"?



> Bad memory layouts are one of the biggest contributors to poor performance.

This will depend on the application, but I've encountered far more of the "wrong data structure / algorithm" kind of problem, like iterating over a list to check if something's in there when you could just make a map ("we need ordering": sure, we have ordered maps!).
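
A minimal sketch of that fix in Rust, using a `BTreeSet` so the "we need ordering" objection is covered too:

    use std::collections::BTreeSet;

    fn main() {
        let items: Vec<u64> = (0..100_000).collect();

        // O(n) per lookup: worst case scans the whole Vec.
        let found_slow = items.contains(&99_999);

        // O(log n) per lookup, and iteration is still ordered.
        let set: BTreeSet<u64> = items.iter().copied().collect();
        let found_fast = set.contains(&99_999);

        assert_eq!(found_slow, found_fast);
    }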



The social problem with visual programming is indeed the same as with "Mythical Non-Roboticist". But there are quite a few issues on the technical side too:

- Any sufficiently advanced program has a non-planar dataflow graph. Yes, "pipelines" are fine, but anything beyond that and you are going to need labels. And with labels it becomes just like a plain old non-visual program, only less structured.

- Code formatting becomes much more important and much harder to do. With textual program representation it is more or less trivial to do auto-formatting (and the code is somewhat readable even with no formatting at all). Yet we still don't have a reliable way to lay out a non-trivial graph so that it doesn't look like a spaghetti bowl. I find UML state machines very useful and also painful, because after every small edit I have to spend ten minutes fixing the layout.

- Good data/program entry interfaces are hard to design and novel tools rarely do a good job of it the first time. Most "visual" tools have a total disaster for a UI. Vs. text editors that were incrementally refined for some 70 years.



+1

I'd add versioning and diff tools as another critical advantage for text. If your visual tool can't provide a superior diff experience, then it's dead on arrival for most serious projects.



> Any sufficiently advanced program has non-planar dataflow graph.

For some reason this reminded me of the elevated rails coming in the next Factorio update. Maybe visual editors need something similar? Even Logisim can distinguish between a node (three or more wires join) and two wires that just cross without interacting.



I disagree. As someone who likes PlantUML specifically because it lets me store diagrams as code, I too frequently end up spending just as much time trying to indirectly coerce the thing into the layout I want since it won’t let me position things explicitly.



I am surprised I have not seen LabView mentioned in this thread. It is arguably one of the most popular visual programming languages after Excel and I absolutely hate it.

It has all the downsides of visual programming that the author mentions. The visual aspect of it makes it so hard to understand the flow of control. There is no clear left to right or top to bottom way of chronologically reading a program.



I agree.

LabView’s shining examples would be trivial Python scripts (aside from the GUI tweaking). However, its runtime interactive 2D graph/plot widgets are unequaled.

As soon as a “function” becomes slightly non trivial, the graphical nature makes it hard to follow.

Structured data with the “weak typedef” is a minefield.

A simple program to solve a quadratic equation becomes an absolute mess when laid out graphically. Textually, it would be a simple 5-6 line function that is easy to read.

Source control is also a mess. How does one “diff” a LabView program?



When I had some customers working with it a few years ago, they were trying to roll out a visual diff tool that would make source control possible.

I don't know if they ever really delivered anything or not. That system is such an abomination it drove me nuts dealing with it, and dealing with scientists who honestly believed it was the future of software engineering and all the rest of us were idiots for using C++.

The VIs are really nice, when you're connecting them up to a piece of measurement hardware to collect data the system makes sense for that. Anything further and it's utter garbage.



Python's equivalent of LabView would be Airflow. Both solve the same CS problem (even though the applications are very different).

Airflow is almost universally famous for being a confusing, hard-to-grasp framework, but nobody can actually point to anything better. And yeah, it's incomparably better than LabView; they're not even in the same race.



> Source control is also a mess. How does one “diff” a LabView program?

With LabVIEW, I'm not sure you can. But in general, there are two ways: either by comparing the underlying graphs of each function, or by working on stored textual representations of the topologically sorted graphs and comparing those. More broadly, since different versions of any code are nodes in a graph, a visual versioning system makes sense.
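
A rough sketch of the second approach in Rust (simplified to sorting nodes by name rather than topologically, which is already enough to make the serialization canonical and therefore diffable):

    use std::collections::BTreeMap;

    // Emit a canonical text form of a graph: each node with its sorted
    // successors. Node names, not visual layout, determine the output,
    // so two different drawings of the same graph serialize identically.
    fn canonical_form(edges: &[(&str, &str)]) -> String {
        let mut adj: BTreeMap<&str, Vec<&str>> = BTreeMap::new();
        for &(from, to) in edges {
            adj.entry(from).or_default().push(to);
            adj.entry(to).or_default(); // make sure sinks appear too
        }
        let mut out = String::new();
        for (node, succs) in adj.iter_mut() {
            succs.sort();
            out.push_str(&format!("{} -> [{}]\n", node, succs.join(", ")));
        }
        out
    }

    fn main() {
        // Same graph entered in two different orders...
        let v1 = canonical_form(&[("a", "b"), ("a", "c"), ("c", "d")]);
        let v2 = canonical_form(&[("c", "d"), ("a", "c"), ("a", "b")]);
        assert_eq!(v1, v2); // ...same text, so an ordinary diff works
    }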



This is exactly why a visual representation of code can be useful for analyzing certain things, but will rarely be the best (or even preferred) way to write code.

I think a happy medium would be an environment where you could easily switch between "code" and "visual" view, and maybe even make changes within each, but I suspect developers will stick with "code" view most of the time.

Also, from the article: > Developers say they want "visual programming"

I certainly don't. What I do want is an IDE which has a better view into my entire project, including all the files, images, DB, etc., so it can make much better informed suggestions. Kind of like JetBrains on steroids, but with better built-in error checking and autocomplete suggestions. I want the ability to move a chunk of code somewhere else, and have the IDE warn me (or even fix the problem) when the code I move now references out-of-scope variables. In short, I want the IDE to handle most of the grunt work, so I can concentrate on the bigger picture.



And Simulink. I lost years in grad school to Simulink, but it is very nice for complex state machine programming. It’s self documenting in that way. Just hope you don’t have to debug it because that’s a special hell.



I quite like Simulink because it's designed for simulating physical systems which are naturally quite visual and bidirectional. Like circuit diagrams, pneumatics, engines, etc. You aren't writing for loops.

Also it is actually visually decent unlike LabVIEW which looks like it was drawn by someone who discovered MS Paint EGA edition.



Simulink is based on the block diagram notation used in control theory for decades earlier - before personal computers and workstations. The notation is rigorous enough you can pretty much pick up a book like the old Electro-Craft motor handbook (DC Motors Speed Controls Servo Systems), enter the diagrams into Simulink, and run them. With analogous allowances to how you would enter an old schematic into a SPICE simulator.

LabView was significantly more sui generis and originated on Macintosh about a decade earlier. I don't hate it but it really predates a lot of more recent user experience conventions.



Most industrial automation programming happens in an environment similar to LabView, if not LabView itself. DeltaV, Siemens, Allen-Bradley, etc. Most industrial facilities are absolutely full of them with text-based code being likely a small minority for anything higher level than the firmware of individual PLCs and such.



A lot of these environments inherit a visual presentation style (ladder logic) that comes from the pre-computer era, and that works extremely well for electrical schematics when conveying asynchronous conditional behaviors to anyone, even people without much of a math background. There's a lot of more advanced functions these days that you write in plain C code in a hierarchical block, mostly for things like motor control.



I like function block on the Schneider platform for process control with more analog values than Booleans. It visualizes the inputs, control loop, and output nicely.

Numeric values in ladder feel a bit kludgey.



These are standardized IEC 61131-3 languages https://en.wikipedia.org/wiki/IEC_61131-3

Ladder, SFC and FBD are all graphical languages used to program PLCs. Ladder is directly based on electrical ladder schematics and common in the USA. The idea was that electricians and plant technicians who understood ladder schematics could now program and troubleshoot industrial computers. SFC and FBD were more common in Europe, but nowadays you mostly see Structured Text, a Pascal dialect (usually with bolted-on vendor OOP lunacy).

I will admit that for some programs, Ladder is fantastic. Of course ladder can be turned into horrid spaghetti if the programmer doesn't split up the logic properly



I think the whole flow concept is really only good for media pipelines and such.

In mathematics, everything exists at once just like real life.

In most programming languages, things happen in explicit discrete steps which makes things a lot easier, and most node based systems don't have that property.

I greatly prefer block based programming where you're dragging rules and command blocks that work like traditional programming, but with higher level functions, ease of use on mobile, and no need to memorize all the API call names just for a one off tasks.



What would be useful is a data flow representation of the call stack of a piece of code. Generated from source, and then brought back from the GUI into source.



I don't hate it. I feel it's pretty good for talking to hardware, (understanding) multi-threading, agent-oriented programming, message queues, etc.

It's also fairly good for making money: the oil and gas industry seems to like using it (note: n = 1, I only did one oil n gas project with it).



> How does version control work with Labview?

Labview does have diff and merge tools. It feels kind of clunky in practice, kind of like diffing/merging MS Office files. In my experience people think of versions of LabView code as immutable snapshots along a linear timeline and don't really expect to have merge commits. Code versions may as well be stored as separate folders with revision numbers. The mindset is more hardware-centric; e.g., when rewiring a physical data acquisition system, reverting a change just means doing the work over again differently. So LabView's deficiencies in version control don't stand out as much as they would in pure software development.

https://www.ni.com/docs/en-US/bundle/labview/page/comparing-...



As someone who used to use (and hate) LabVIEW, a lot of my hatred towards it was directed at the truly abysmal IDE. The actual language itself has a lot of neat features, especially for data visualization and highly parallel tasks.



Anyone who mentions visual scripting without mentioning the game industry just hasn't done enough research at all. It's actually a really elegant way to handle transforming data.

Look up Unreal blueprints, shader graphs, and procedural model generation in Blender or Houdini. Visual programming is already here and quite popular.



[post author] I am familiar with those and have used a couple. There are similar examples in music, where visual programming dominates.

The implied audience of this post (not clear) is people writing business applications, web dev, etc. The examples are picked to reflect what could be useful to those developers. In other words, all the examples you mentioned are great but they are not how a "software engineer in a software company" does their job.



> In other words, all the examples you mentioned are great but they are not how a "software engineer in a software company" does their job.

creating blueprints or Max/MSP programs is definitely software engineering; it requires you to think about correct abstractions, computations, data flow and storage, etc.

Also, there are currently 398 Rust jobs worldwide advertised on LinkedIn, vs. 1473 for "unreal blueprints".



My experience is that the software engineers at game companies generally hate the visual programming tools. They want to work with code. It's the game designers who (sometimes) like using visual tools.



I spent about a year working with blueprints a while back and I found some things just really annoying, like making the execution line go backwards into a previous block. If you do it straight there, it won't let you; if you use a single reroute node you get an ugly point, so you have to use two reroute nodes to get it to work properly and nicely. Also they don't have all the nodes you need, so you end up having to write some new ones anyway.



And AI - which kind of changed the game in recent years. A "blueprints copilot" akin to GitHub Copilot will be very difficult to create because there's no "blueprints text" to train an AI on. Nowadays in my hobby pet projects I find it easier to write C++ with Copilot than Blueprints.



There's a JSON format of the blueprints that you can see when you copy/paste. It's just a bit more ambiguous than the usual binary format. It's not an impossible problem at all.



Not an impossible problem only in theory. It's currently practically impossible and will take at least a year to solve if anybody starts to work on this at all.

Since my current project does involve wrangling AI to do stuff - forcing it to output a consistent, complete, large JSON with an exact specific format is very difficult and takes a lot of time (you won't be able to draw Blueprints line by line to show to the user that AI is processing). Definitely no autocomplete-like experiences maybe ever.

For example, look at the text representation of these 6 (!) nodes:

https://blueprintue.com/blueprint/yl8hd3-8/

It's enormous.

And the second even bigger problem: On forums and basically everywhere all users share screenshots with descriptions. There's not enough training data for anything meaningful.

I tried to force copilot/gpt to output even a small sample of copy-pastable blueprint and it just can't.



As someone who works for games, I think the biggest problem of node-based systems is... they're all different (in terms of UI/UX).

Unreal blueprints, Substance Designer, Houdini, Blender's geometry nodes, Unity shader nodes... they all look different and act differently. Different shortcuts and gestures. Different window/panel management.

Different programming languages have different syntax rules and libraries, of course. But at least they're all manipulated with one single interface, which is your editor. If you use vim bindings, you don't need to worry about "what pressing j does". It moves the cursor down for all the languages.

People who spent X hours customizing their vim/emacs will benefit from them no matter what language they use next. I spent a lot of time customizing my Houdini keybindings and scripts, and this effort will be thrown out the window if I later switch to Blender.



You know, this is actually really insightful. A standard graph format that all these tools could import/export to could lead to a lot more reusable tooling.

The incentives aren't quite there at the moment, but maybe someone like Microsoft or JetBrains takes a stab at it.



One could even go further and expand this to the players themselves, as there are certain games that might be viewed as visual programming tools. Factorio is a great example, as, conceptually speaking, there isn't much of a difference between a player optimising their resource flow in the game vs a developer managing the data flow in a State Machine.



I've been using ComfyUI recently to manage complex image diffusion workflows, and I had no idea it was inherited from much older shader editors and vfx. It's a shame we can end up using a tool for years without knowing anything about its predecessors.



One major difference I’ve seen in shader graph type tools is that they are stateless, or almost stateless. The output of the shader graph is a function of time and some fixed parameters, and there are rarely feedback loops. When there are feedback loops in shader graphs, they are represented explicitly via nodes like TouchDesigner’s feedback TOP.

This way of expressing computations lends itself well for shader programming where the concurrency of the GPU discourages manipulation of arbitrary state.

In contrast, business logic programmed for the CPU is generally more stateful and full of implicit feedback loops. IMO these types of computations are not expressed well using node based visual programming tools because the state manipulation is more complex.



I used Blueprint's predecessor 'Kismet' quite extensively. I absolutely hated it. Give me UnrealScript any day. Blueprint is popular because that's all you have. They removed UnrealScript. To do anything even slightly complex you have to use C++ now.



I wonder about the sweet spot between BP and C++. One of my friends is making a commercial indie game in UE and he is doing everything in BP because he is an artist, so C++ is particularly daunting for him. He did complain about the spaghetti hell he eventually came upon, without any way to solve it, but from the number of wishlistings (targeting 10K), I'd say it is probably going to be a successful game, judging by first-indie-game standards.



Blueprints get used because the only alternative in UE, writing decade-old-paradigm C++ code with a two-decades-old macro DSL on top of it, is a lot worse.

Unity has had multiple visual programming packages and people don't really care. Writing 2017 era C# paradigm code with an API resembling 2004 Macromedia Flash is not nearly as bad.



> Unity has had multiple visual programming packages and people don't really care.

People cared enough for Unity to buy one and make it official but Unity doesn't care so it mostly just rots.



> One reason is because we think that other, more inexperienced, programmers might have an easier time with visual programming. If only code wasn't as scary! If only it was visual! Excel Formula is the most popular programming language by a few orders of magnitude and it can look like this:

> =INDEX(A1:A4,SMALL(IF(Active[A1:A4]=E$1,ROW(A1:A4)-1),ROW(1:1)),2)

Ahem. Excel is one of the most visual programming environments out there. Everything is laid out on giant 2D grids you can zoom in and out of. You can paint arrows that give you the whole dependency tree. You can select, copy, paste, and delete code with the mouse only. You can color things to help you categorize which cell does what. You can create user inputs, charts, and pivot grids with clicks.



As a programmer who had used Excel for years, seeing my accountant start typing a formula, change sheets, select some cells, go back, repeat, was a learning process. I didn't even know you could do that, and also, I hated it. But it worked very well for him.

I've more recently been exposed to a few spreadsheets that are used to calculate quotes in major insurance businesses when I was asked to create an online process instead, replicating the questions and formula.

They're things of horrifying eldritch beauty. I seem to always find at least one error, and no one I'm allowed to talk to ever really knows how they work since they're built up over years. Those dependency arrows are a life saver.



> I seem to always find at least one error

Every time I see a spreadsheet where the dependencies are hard to track, I've found enough errors that the results were completely bogus.

Also every time, nobody cared.



In my case at least, they probably accounted for it somewhere by adjusting rates elsewhere so it works out. So it's a bit risky to just change, similar to code that is known to be wrong but too hard to change since "everything works".



Oh sweet summer child. You will probably never experience running into the 10-million-cell limit on a Google Sheet with more than a hundred sheets and waiting a quarter of an hour for your spreadsheet to update.



I have had to debug insane Excel sheets, which were used to generate C code, based on the geometric properties of an object.

Excel works very well for describing many simple relationships. It totally falls apart the moment you have complex relationships, as they become mentally untraceable. Functions allow you to abstract away functionality; referencing cells does not.

I am pretty certain that Excel is one of the most misused tools and suffers the most from "I use it because I know it".



Excel could do this so much better though (and I think Excel is the best candidate for a visual scripting overhaul). The cell could have two parts: the top part is the function signature (other cells could reference it by signature, or by cell number), the bottom part is the code. Each cell is a function.

People put huge unreadable basic functions in that tiny box. It's such an obvious pain point; I'm surprised it's never been addressed. Replace VBA with C#, have a visual line linking cells to other cell references, bam: million-dollar product.



A basic problem I have, looking at an Excel spreadsheet, is that I don't know which cells are calculated by a formula and which are constants.

Maybe it would be easier if the spreadsheet were divided into an upper part with only constant cells and a lower part with only calculated values. Would that help me?



> A basic problem I have, looking at an Excel spreadsheet, is I don't know which cells are calculated by a formula, which are constants.

Use Ctrl-` (show formulas).



Thanks for the tip! I will use it next time I open an Excel spreadsheet.

I'm also thinking in terms of perhaps having a different visual style for cells with formulas, when the spreadsheet is presented on paper etc.



> You can paint arrows that give you the whole dependency tree.

Sorry, is that a manual process, or is there an option in Excel to show multi-ancestor dependencies?

I'm aware that you can double-click to see a single cell's inputs, but I want to go deeper.



Still impossible to know what an Excel sheet does just by looking at it. The 2D grid obfuscates the relationships between the data.

Power BI does (almost) everything Excel does but better.



I think people get too hung up on the visuals. There was a (failed) attempt to create something called intentional programming by Charles Simonyi. That happened in the middle of the model driven architecture craziness about 20 years ago.

In short, his idea was to build a language where higher-level primitives are created by doing transformations on lower-level syntax trees. All the way down to assembly code. The idea would be that you would define languages in terms of how they manipulate existing syntax trees. Kind of a neat concept. And well suited to visual programming as well.

Whether you build that syntax tree by typing code in an editor or by manipulating things in a visual tool is beside the point. It all boils down to syntax trees.
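
A toy sketch of that idea in Rust (all names invented): the same tree could equally well have been produced by a text parser or a visual editor, and a "higher level primitive" is just a transformation over it:

    #[derive(Debug, Clone)]
    enum Expr {
        Num(f64),
        Add(Box<Expr>, Box<Expr>),
        Mul(Box<Expr>, Box<Expr>),
    }

    // One rewrite pass over the tree: turn `e * 2` into `e + e`.
    fn strength_reduce(e: Expr) -> Expr {
        match e {
            Expr::Mul(a, b) => {
                let a = strength_reduce(*a);
                match strength_reduce(*b) {
                    Expr::Num(n) if n == 2.0 => {
                        Expr::Add(Box::new(a.clone()), Box::new(a))
                    }
                    b => Expr::Mul(Box::new(a), Box::new(b)),
                }
            }
            Expr::Add(a, b) => Expr::Add(
                Box::new(strength_reduce(*a)),
                Box::new(strength_reduce(*b)),
            ),
            leaf => leaf,
        }
    }

    fn main() {
        let e = Expr::Mul(Box::new(Expr::Num(3.0)), Box::new(Expr::Num(2.0)));
        println!("{:?}", strength_reduce(e)); // Add(Num(3.0), Num(3.0))
    }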

Of course that never happened and MDA also fizzled out along with all the UML meta programming stuff. Meta programming itself is of course an old idea (e.g. Lisp) and still lives on in things like Ruby and a few other things.

But more useful in modern times is how refactoring IDEs work: they build syntax trees of your code and then transform them, hopefully without making the code invalid. Like a compiler, an IDE needs an internal representation of your code as a syntax tree in order to do these things. You only get so far with regular expressions and trying to rename things. But lately, compiler builders are catching onto the notion that good tools and good compilers need to share some logic. That too is an old idea (Smalltalk and IBM's Visual Age). But it's being re-discovered in e.g. the Rust community, and of course Kotlin is trying to get better as well (being developed by Jetbrains and all).

But beyond that, the idea seems a bit stuck. Too bad because I like the notion of programs being manipulated by programs. Which is what refactoring does. And which is what AI also needs to learn to do to become truly useful for programming.



Most of this isn't visual "programming", just good explanatory diagrams. I think it gets to a core issue, which is a dichotomy between:

- trying to understand existing programs - for which visuals are wanted by most, but they usually need conscious input to be at their best

- programming (creating new code) itself - where the efficiency of the keyboard (with its 1D input that goes straight to spaghetti code) has never been replaced by visual (mouse-based?) methods, other than for very simple (click-and-connect) type models



You are right. The diagrams are used as explanations not as the source of the program. But wouldn't it be neat if when you sketch out the state transition in a diagram (how I think about the state transitions), _that diagram_ was the source of truth for the program?

That is the implied point: let's go to places where we already draw diagrams and check if we can elevate them into the program



This can be really tricky to do. I reached the limit of my brain's working capacity designing a priority inheritance system, and sketched the state machine out in a dot file, visualized with graphviz - this worked really well for reasoning through the correctness of the algorithm and explaining it to others. I tried to structure the implementation code to match it and I was able to get pretty close; but the actual states were a bunch of bit-packing and duplicated control flow to get optimal assembly output for the hottest paths. Each one of those changes was easy to reason about as an isolated correct transformation of the original structure in code, but would have been a mess visually.



That sounds super interesting!

Did I understand correctly that the additional complexity came because you needed to emit optimal assembly? Or was implementing the logic from the state machine complicated enough?



Designing the state machine was hard. The implementation of that state machine was not that bad, because I'd spent so much time thinking through the algorithm that I was able to implement it pretty quickly. The implementation difficulty was optimizing the uncontended case - I had to do things like duplicate code outside the main CAS loop to allow that to be inlined separately from the main body, structure functions so that the unlock path used the same or fewer stack bytes than the lock path, etc. Each of those code changes was straightforward, but if I had faithfully copied all those little tweaks into the state machine diagram, it would be so obfuscated that it'd hide any bugs in the actual core logic.

So I decided that the diagram was most useful for someone looking to understand the algorithm in the abstract, and only once they had been convinced of its correctness should they proceed to review the implementation code. The code was a terrible way to understand the algorithm, and the visualization was a terrible way to understand the implementation.
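
This isn't the parent's priority-inheritance code, just a generic sketch in Rust of the fast-path/slow-path split being described: keep the uncontended CAS small enough to inline, and move the contended loop into a separate cold function so it stays out of the hot path's instruction stream:

    use std::hint;
    use std::sync::atomic::{AtomicBool, Ordering};

    pub struct SpinLock {
        locked: AtomicBool,
    }

    impl SpinLock {
        pub const fn new() -> Self {
            Self { locked: AtomicBool::new(false) }
        }

        // Uncontended fast path: a single CAS, cheap to inline.
        #[inline]
        pub fn lock(&self) {
            if self
                .locked
                .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
                .is_err()
            {
                self.lock_contended(); // rare case, kept out of line
            }
        }

        // Contended path lives in its own #[cold] function so it does
        // not bloat the caller's generated code.
        #[cold]
        fn lock_contended(&self) {
            loop {
                // Spin on a plain load to avoid hammering the cache
                // line with CAS traffic.
                while self.locked.load(Ordering::Relaxed) {
                    hint::spin_loop();
                }
                if self
                    .locked
                    .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
                    .is_ok()
                {
                    return;
                }
            }
        }

        #[inline]
        pub fn unlock(&self) {
            self.locked.store(false, Ordering::Release);
        }
    }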



From what I've seen when code is generated from formal specs it ends up being inflexible. However, do you think it would be valuable to be able to verify an implementation based on a formal spec?



I just realized my previous comment left out what I was trying to say—my bad! I think what I was trying to ask was: would it be possible to generate a formal specification from a graphical representation, and then use that specification to verify the source?

Also thank you for those links! I'll definitely give them a read.



I'm far from an expert in formal verification, I probably should be doing more of it than I do. From my two links, the way I've seen formal verification work is to either translate the code line-by-line in an automated or manual way into a formal language, then check for possible orderings that violate properties you care about; or define a formal model of the state machine, and insert validation of all the transitions in the code.

If you were going to do formal verification from the graphical representation, it would be on the algorithm; namely does it always converge, does it ever deadlock, does it ever fail mutual exclusion. If the goal is for a computer to analyze it, it can be precisely as complex as the source code, so yes. But at that point it's not useful visually for a human to read.



Virtually all diagrams are representations of declarations that are the source of truth. Visual editors just help you edit that, but rarely get close to validating the underlying structure.

For things like programming languages and markdown, users switch between the modes. For something like SVG, users rarely learn or solve problems at the declaration level.

The real questions come with declaration re-usability and comparison. Two PDFs can look exactly the same but be vastly different declarations, which makes comparison and component re-use essentially impossible.

It turns out, much of the benefit of visual editors is built on the underlying declaration model where it supports user edit/inspection goals.

So I think the point is not to have the visual be the source of truth, but to have more visualization and visual editors for the sources we have.

There are/were excellent visual editors for Java and Apple GUIs that supported round-tripping (mode-dependent source of truth). But we seem to have abandoned them not just because they're old, but because of the extra scaffolding required to do the round-tripping.

So my take-away has been that any visualization must be source-first and source-mainly - with no or minimal extra metadata/scaffolding, as markdown is now. That would mean the implied point is that we should visualize (or otherwise abstract to understand) the source we have, instead of using diagrams as the source of truth.



> Schematix provides diagrams as a dynamic resource using its API. They aren't images you export, they don't end up in My Documents. This isn't Corel Draw. In Schematix, you specify part of your model using a graph expression, and the system automatically generates a diagram of the objects and relations that match. As your Schematix model changes, the results of the graph expression may change, and thus the visual diagram will also change. But the system doesn't need you to point and click for it. Once you've told it what you want, you're done.

What an interesting tool! It's rare to see robust data models, flexible UX abstractions for dev + ops, lightweight process notations, programmatic inventory, live API dashboards and a multi-browser web client in one product.

Do you have commercial competitors? If not, it might be worth doing a blog post and/or Show HN on OSS tooling (e.g. Netbox inventory, netflow analysis of service dependencies) which offers a subset of Schematix, to help potential customers understand what you've accomplished.

Operational risk management consultants in the finance sector could benefit from Schematix, https://www.mckinsey.com/capabilities/risk-and-resilience/ou.... Lots of complexity and data for neutral visualization tooling.



Schematix is somewhat unique. Direct competitors? -- not exactly, but IT asset managers, DCIM, BC/DR tools, and CMDBs are all competitors to some degree.

Some of our best users are professional consultants who use us for a project which often introduces us to a new customer.

A Show HN would certainly be in order. Thanks for the thoughts!



Yes, in order to be visual coding (or better yet, a visual specification) it needs to be executable in its native form, or maybe via a very direct translation.

The concept of an executable specification first came to my attention in IEC 61499, the standard for Distributed Automation. First published in 2005, it was way, way ahead of its time, so far ahead it is still gaining traction today.

Shout out to anyone reading who was involved in the creation of IEC 61499 in 2005; it was a stroke of genius, and for its time, orders of magnitude more so. It is also worth a look just to prompt thinking for anyone involved in distributed systems of any kind.

Initially I thought there was no way you could have such a thing as an executable specification, but then, over many years I evolved to a place where I could generically create an arbitrary executable specification for state based behavior (see my other post this topic).

I believe I have found the best achievable practice to allow defining behaviors for mission/safety critical functionality, while avoiding implicit state.



I forget its name, but there was an IBM graphical tool with which you created UML diagrams, and it in turn generated code (Java IIRC).

The intermediate representation was in sexps!



Programming “via” Visualization — doesn’t scale. Great for demos. Good in limited places.

Visualizations “of” a Program — quite useful. Note there are lots of different ways to visualize the same program to emphasise / omit different details. The map is not the territory, all models are wrong, etc.



It works and even scales up in some cases.

For example, having models of a capacitor and a resistor, you can put them together in a schematic, which in turn can be a part of a bigger design, then test it in a simulator. That's how Simplorer works. Alternatively you can write the code in VHDL or Modelica. But visual is quicker, easier, and more reliable.

Obviously it works well for UI; it has been used there for decades now.

As for the rest... there are visual programming tools for robots, mostly for kids.



Schematics don't scale well at all - net labels and multiple sheets demonstrate this.

HDLs rule for gate and transistor level circuit design. I don't know what major PCB houses do but I'd be horrified to discover that 16-layer boards still have a visually built schematic producing their netlist: just finding the right pad on 256BGA components would be awful, let alone finding what else is connected to that net.



> Schematics don't scale well at all

Schematics aren't supposed to scale. They're a lossy representation of a subcircuit without caring about the intricate details like footprints or electrical/electro-mechanic constraints.

PCB designers largely don't use HDLs because they don't really solve their problems. Splitting a BGA component into subcircuits that have easily legible schematics is not hard, but it's also not what they care about. That shit is easy: making sure the blocks are all connected correctly.

Verifying the electrical constraints of the 256 pad component is much harder and not represented in the schematic at all. They need to see the traces and footprint exactly.

As an example, the schematic doesn't tell you if a naive designer put the silkscreen label and orientation marker underneath the component which will cause manufacturing defects like tombstoning in jellybean parts.



I think the difficulty here is addressing: who is your target audience? Depending on that answer, you have different existing, relatively successful visual programming languages. For example, game designers have managed to use Unreal's blueprints to great effect. Hobbyists use ComfyUI's node language to wire up generative AI components to great effect. As far as generic computing goes, Scratch has managed to teach a lot of programming principles to people looking to learn. The problem comes in when you try to target a generic systems programmer: the target is too abstract to be able to create an effective visual language. In this article, they try to solve this issue by choosing specific subproblems where a visual representation is helpful: codebase visualization, computer network topology, memory layouts, etc., but none of them are programming languages.



[post author] I agree. In many domains you can find a great mapping between some visual representation and how the developer (beginner or not) wants to think about the problem.

I personally don't see any one pictorial representation that maps to a general programming language. But if someone does find one, in the large and in the small, that'd be great!



> I personally don't see any one pictorial representation that maps to a general programming language.

I agree. What I've had in mind for a while now is very different from this.

What I envision is "text" in the sense that it's not a diagram, but more advanced textual representation. Over hundreds of years mathematicians have evolved a concise, unambiguous, symbolic notation for formulae, yet programmers are still using tools that are backward compatible with dot matrix terminals from the 60's: simple characters used to write lines in files.

Blocks, conditions, iteration, exits (return, exception, etc.,) pipelines, assignment, type and other common concepts could be represented symbolically. The interface would still be text-like, but the textual representation would be similar to mathematical notation, where the basic constructs of code are depicted as common, well understood, dynamically drawn symbols that programmers deeply inculcate.

Key properties include extreme concision and clarity of the "instruction pointer." Concision is crucial to reduce the cognitive cost of large amounts of logic. The latter is entirely obscured in most visual programming schemes and also absent from conventional mathematical notation, yet the location of the current instruction is absolutely crucial to understanding logic.

I wish I had more time to elaborate what I have in mind, much less actually work on it.



Scratch is the only type of visual programming I've enjoyed using. It's easy to read if you're an experienced programmer because it has the same structure as regular code, and it's easy to read for beginners because everything is broken into large blocks that have what they do written right on them. The way code is structured in most programming languages is actually very logical and intuitive, and it's the most successful system we have so far. The problem for beginners is that they can't figure out if they enjoy programming until they've learned the syntax, which can be very discouraging for some people. I've seen Scratch bridge that gap for people a couple of times, and I think it's probably the best model when it comes to teaching people to code.

I think other types of models would only be useful for situations where writing code isn't the most intuitive way to make something. From my limited experience, a visual system for making shaders is a pretty good idea, because ideally, you don't want to have many conditional branches or loops, but you might have a lot of expressions that would look ugly in regular code.



I'm going to throw a vote in here for Grasshopper, the visual programming language in Rhino3d as doing it the right way. It is WIDELY used in architectural education and practice alike.

Unfortunately, most visuals you'll get of the populated canvas online are crap. And for those of us who make extremely clean readable programs it's kind of a superpower and we tend to be careful with how widely we spread them. But once you see a good one you get the value immediately.

Here's a good simple program I made, as a sample. [0]

Also, I want to give a shout-out to the Future of Coding community in this. The Whole Code Catalog [1] and Ivan Reese's Visual Programming Codex [2] are great resources in the area.

I also have to mention, despite the awful name, Flowgorithm is an EXCELLENT tool for teaching the fundamentals of procedural thinking. [3] One neat thing is you can switch between the flow chart view and the script code view in something like 35 different languages natively (or make your own plugin to convert it to your language of choice!)

p.s. If you are used to regular coding, Grasshopper will drive you absolutely freaking bonkers at first, but once you make peace with the fact that it is looping, yet you have to let the whole program complete before seeing the result, you'll get used to it.

[0] https://global.discourse-cdn.com/mcneel/uploads/default/orig...

[1] https://futureofcoding.org/catalog/

[2] https://github.com/ivanreese/visual-programming-codex

[3] http://flowgorithm.org/



Agreed, Rhino/Grasshopper is an amazing tool, especially once you start adding in C# components. I’ve been using it off and on for several years on custom consumer product projects. It’s an under utilized workflow in many fields requiring 3D modeling imo. I just finished a custom VR gasket generator for the Quest 3 that uses face scans from iPhone as the input and the project wouldn’t have been possible without Grasshopper: https://youtu.be/kLMEWerJu0U


That's rad - thanks for sharing! I'll try to watch the whole thing when I'm not on deadline.

My jewelry work [0] is almost all in Grasshopper, as I've built up such a workflow there over the past... 8 years? that I don't need custom tools for most of it.

But my research work is all about building custom tools in C#. In fact I just finally published my component library yesterday [1]. Frankly I should have released it years ago, but I finally just bit the bullet.

[0] https://Xover0.com [1] https://www.food4rhino.com/en/app/horta



Vaguely related: Rhino 3D has the best interface of any 3D modeling tool I've ever used and I'm sad it is not the norm. Its integration between command line and UI is absolutely amazing.

I remember when I first tried SketchUp I was horrified at how atrocious the UI is compared to Rhino 3D.



Yeah, not quite "visual programming," but there is a similar argument to be made about a program's user interface and how its design suggests it should be used. At this point, that's probably a far better explored area than the same aspect of visual programming.

That said - Rhino is one of the exemplars in this area. I always tell my students - if you don't know what to do, just start typing. As you say the relationship of the graphical command processes and the CLI is stellar.

But - one big shout back to Grasshopper that NOTHING ELSE compares to - if you hold "ctl-alt" and click-hold on a component on the canvas, it opens up the library tab where that component can be found and puts a big arrow pointing to a big circle around it. It's one of the most shockingly useful commands in any program, ever. I've had rooms of students audibly gasp when shown that.



I would love something to better visualize the software I'm working with. However...

1: It would need to have dynamic abstraction. Sometimes I need to see very top-level flow and organization, sometimes I need to dig down a bit. I'll essentially never need the lowest level like IF statements as the OP mentions, but I'll definitely need multiple different levels.

2: It would need to have some awareness of the intention of the software, its history, and it would need answers to some questions. It would need to be as competent as a developer experienced with writing the software to really give me any useful insights. It can't get confused because there are some macros sprinkled in, 3-4 different languages interacting, or parts of the software only used when certain environment variables are set. It needs to handle basically all of the edge cases. If it gets "confused", it'll end up taking more time than it saves.

#1 is doable, but hard. #2 is basically magic. (For now)

You can claim machine learning can do it, but there's nothing close to that sophisticated and reliable in existence.



People have mentioned a bunch of successful visual programming applications, but one that I've been thinking a lot about lately is Figma.

Figma has managed to bridge the gap between designers, UXR, and engineers in ways that I've never seen done before. I know teams that are incredibly passionate about Figma and use it for as much as they can (which is clearly a reflection of Figma themselves being passionate about delivering a great product) but what impressed me was how much they focus on removing friction from the process of shipping a working application starting from a UI mockup.

I think Figma holds a lot of lessons for anyone serious about both visual programming and cross-functional collaboration in organizations.



Most times in my career that I've seen people talking about visual programming, it's not about the developers - it's about lowering the bar so that (cheaper) non-developers can participate.

A Business Analyst may or may not have a coding background, but their specifications can be quite technical and logical and hopefully they understand the details. The assumption is that if we create our own Sufficiently Advanced Online Rule Engine they can just set it all up without involving the more expensive programmers.

This is discussed a bit in the first paragraph, but I just wanted to reiterate that most systems I had to deal with like this were talked about in terms of supplying business logic, rules, and control flow configuration to a pre-existing system or harness that executes that configuration. The "real" programmers work on that system, adding features, and code blocks for anything outside the specification, while the other staff setup the business logic.

It works to some degree. I think things like Zapier can be quite good for this crowd, and a lot of mailing list providers have visual workflow tools that let non-programmers do a lot. A DSL like Excel formulas would be in this group too, since it operates inside an existing application, except that it's non-visual. Some document publishing tools like Exstream (I worked with it pre-HP, so years ago) did a lot in this space too.

I did read and appreciate the whole article, I just noticed this part for a reason - I'm working on a visual question builder again right now for a client who wants to edit their own customer application form on their custom coded website, instead of involving costly programmers. It always ended poorly in the past at my previous company, but maybe it'll be different this time.



>it's about lowering the bar so that (cheaper) non-developers can participate.

I think that is a terrible approach to anything. Programming isn't that hard and without a doubt anyone who can do business analysis is mentally capable of writing Python or whatever other scripting language.

Instead of teaching people something universal, which they can use everywhere and which they can expand their knowledge of as needed, you are teaching them a deeply flawed process, which is highly specific, highly limited and something which the developer would never use themselves.

Having a business analyst who is able to implement tasks in a standard programming language is immensely more valuable than someone who knows some graphic DSL you developed for your business. Both the interest of the learner and the corporation are in teaching real programming skills.

Even the approach of creating something so "non-programmers" can do programming is completely condescending, and if I were in that position I would refuse to really engage on that basis alone.



> you are teaching them a deeply flawed process, which is highly specific, highly limited and something which the developer would never use themselves.

That kind of lock-in can be a feature from the employer's perspective. I did actual coding for years in an environment where what I learned was not very widely applicable at all, for similar reasons. I'm now happily in recovery :) But it makes it harder to leave when you feel like you lag behind where you should be in your career.

I don't think tools like Zapier are condescending. I can and have written code to connect APIs, but Zapier made some stuff way easier, and it lets people like my wife get the same stuff done with far less effort. She has no interest in learning programming. There will be stuff the tool can't do, so then the programmers can step in.

And in my prior job, many people became BAs from a coding background specifically to get out of writing code. They can do it - they don't want to. They're happier in MS Office or similar tools.



>That kind of lock-in can be a feature from the employer's perspective

And it can be a huge problem, as the employer has to maintain a complex visual DSL and teach it to every new employee. Locking employees in seems like a very easy way to make people miserable and unproductive.

An employer wants employees who are productive long term. Giving them good tools and the ability to learn new things allows them to not hate their jobs. And an employee who knows basic programming is always an asset.

>And in my prior job, many people became BAs from a coding background specifically to get out of writing code. They can do it - they don't want to. They're happier in MS Office or similar tools.

I completely understand that. But there are definitely problems that need to be solved with programming, and having people with the ability to do so can only be an asset, even if they aren't full-time developers.

In general I think it is a pretty hard sell to teach someone a skill with no other applications. This is different if that person only wants to achieve a certain thing; then transferability is irrelevant. But if you want someone to learn something new, they need to understand why they should learn it. Programming isn't particularly hard, and teaching someone a standard programming language they can use in their job, instead of a specialized DSL, is an enormous benefit.

If you came to me and told me you were going to teach me something totally different from what you yourself would use, a special way by which you have made something easy so that I can understand it, I would refuse. I might be projecting here, but I genuinely feel that many people would look at it the same way.



> it's about lowering the bar

I think that might be right.

I remember the first time playing with "visual" programming (kind of). It was visual basic, probably the first version.

It lowered the bar for me.

I quickly learned how to create a UI element, and connect things. A button could be connected to an action.

So then I was confronted with event-driven programming, and that exposure was basically what was taught to me.

And then the joy of creating a UI slowed as I exhausted the abstraction of Visual Basic and ended up with a lot of tedious logic.

I had a similar experience with Xcode on macOS. I could quickly create an app, but then the user interface I created dragged me down again. The elegance of a Mac user interface seemed to require a lot of tax forms to fill out to actually get from a visual app to a working app. I really wanted to ask the UI: what dummy stuff, like the app name, hasn't been filled out yet? What buttons aren't connected? How do I do the non-visual stuff visually, like dragging and dropping some connection on a routine? Ugh.

In the end there's a beauty to plain source code, because text is the main and only abstraction. It's not mixed in with a lot of config stuff that only Xcode can edit, and that will probably break when Xcode is upgraded.



This actually works if it's not a generic visual programming solution, but if it's a DSL. Don't give the business people pretty graphical loops, give them more abstract building blocks.

Unfortunately that means paying the professional programmers to build the DSL, so it doesn't reduce costs in the beginning.



I think part of the problem is that coding projects can get really big - like millions of lines of code big. Not making a huge mess of things at that scale is always going to be difficult, but the approach of text-based files with version control, where everyone can bring their favorite editors and tools, seems to work better than everything else we've tried so far.

Also, code being text means you can run other code on your own code to check, lint, refactor etc.

Visual programming - that almost always locks you into a particular visual editor - is unlikely to work at that scale, even with a really well thought out editor. Visual tools are great for visual tasks (such as image editing) or for things like making ER diagrams of your database schema, but I think that the visual approach is inherently limited when it comes to coding functionality. Even for making GUIs, there are tradeoffs involved.

I can see applications for helping non-programmers to put together comparatively simple systems, like the excel example mentioned. I don't think it will replace my day job any time soon.



The issue with every one I've used is that it hides all the parameters away in context-aware dialog boxes. Someone can't come along and search for something; they need to click every element to view the dialog for that element and hunt for what they are looking for. I found every time the lead dev on a project changed, it was easier to re-write the whole thing than to try and figure out what the previous dev did. There was no such thing as a quick change for anyone other than the person who wrote it, and wrote it recently. Don't touch the code for a year and it might as well get another re-write.



Yes, definitely this. I have worked for a couple years on webMethods, where programs can ONLY be created by "drawing/composing" sort of flowcharts (see https://stackoverflow.com/q/24126185/54504 ) and the main problem was always trying to search for stuff inside the "Codebase". And... another benefit of purely text-based code is that you can always run a diff-like utility and quickly zoom in on what has been changed.


I simply have to recommend Glamorous Toolkit to anyone interested in visual programming: https://gtoolkit.com

It focuses on the kind of visual programming the article argues for: Class layout, code architecture, semantics. It's one of the best implementations I have seen. The authors are proponents of "moldable development", which actively encourages building tools and visualizations like the ones in the article.



It's not tied to Smalltalk, at least not completely: the standard distribution comes with a JS and Java parser, and you can use those to create Smalltalk models of their ASTs, making it look like they're just Smalltalk objects too.



This article seems focused on "how do we help programmers via visual programming", and it presents that case very well, in the form of various important and useful ways to use visual presentation to help understand code.

There's a different problem, of helping non-programmers glue things together without writing code. I've seen many of those systems fail, too, for different reasons.

Some of them fail because they try to do too much: they make every possible operation representable visually, and the result makes even non-programmers think that writing code would be easier. The system shown in the first diagram in the article is a great example of that.

Conversely, some of them fail because they try to do too little: they're not capable enough to do most of the things people want them to do, and they're not extensible, so once you hit a wall you can go no further. For instance, the original Lego Mindstorms graphical environment had very limited capabilities and no way to extend it; it was designed for kids who wanted to build and do extremely rudimentary programming, and if you wanted to do anything even mildly complex in programming, you ended up doing more work to work around its limitations.

I would propose that there are a few key properties desirable for visual programming mechanisms, as well as other kinds of very-high-level programming mechanisms, such as DSLs:

1) Present a simplified view of the world that focuses on common needs rather than every possible need. Not every program has to be writable using purely the visual/high-level mechanism; see (3).

2) Be translatable to some underlying programming model, but not necessarily universally translatable back (because of (1)).

3) Provide extension mechanisms where you can create a "block" or equivalent from some lines of code in the underlying model and still glue it into the visual model. The combination of (2) and (3) creates a smooth on-ramp for users to go from using the simplified model to creating and extending the model, or working in the underlying system directly.

One example of a high-level model that fits this: the shell command-line and shell scripts. It's generally higher-level than writing the underlying code that implements the individual commands, it's not intended to be universal, and you can always create new blocks for use in it. That's a model that has been wildly successful.
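To make property (3) concrete, here is a toy Python sketch (all names are hypothetical) of lifting plain functions into "blocks" and gluing them together, shell-pipe style:

    from typing import Callable, Iterable

    Block = Callable[[Iterable[str]], Iterable[str]]

    def make_block(fn: Callable[[str], str]) -> Block:
        # Lift a line-by-line function into a pipeline block.
        def block(lines: Iterable[str]) -> Iterable[str]:
            return (fn(line) for line in lines)
        return block

    def pipeline(*blocks: Block) -> Block:
        # Compose blocks left to right, like a shell pipe.
        def run(lines: Iterable[str]) -> Iterable[str]:
            for b in blocks:
                lines = b(lines)
            return lines
        return run

    strip = make_block(str.strip)
    upper = make_block(str.upper)
    print(list(pipeline(strip, upper)(["  hello ", " world "])))  # ['HELLO', 'WORLD']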



Shameless plug, but this is what we’re trying to do at Magic Loops[0].

We joke it’s the all-code no-code platform.

Users build simple automations (think scrapers, notifications, API endpoints) using natural language.

We break their requests into smaller tasks that are then mapped to either existing code (“Blocks”) or new code (written by AI).

Each Block then acts as a UNIX-like program, where it only concerns itself with the input/output of its operation.

We’ve found that even non-programmers can build useful automations (often ChatGPT-based like baby name recommenders), and programmers love the speed of getting something up quickly.

[0] https://magicloops.dev



Mindstorms is an example of what did not work. I want to provide an example of what does: the BBC micro:bit. It has a visual programming interface that is translatable to Python or JavaScript.



It seems odd to me not to mention things like MaxMSP or PD in an article like this. Arguably Max is one of the most successful standalone visual programming languages (standalone in so far as it’s not attached to a game engine or similar - it exists only for its own existence).



Those two are both primarily for real time signals and music right? That is a great domain for wires, transforms, and pipelines.

Have you ever seen them used in a different context?



Sequence diagrams (which seem not much different from swimlane diagrams) are great, so much so that I created a tool that generates them from appropriately built TLA+ specs representing message exchange scenarios: https://github.com/eras/tlsd

However, while they are good for representing scenarios, they are not that good for specifying functionality. You can easily represent the one golden path in the system, but if you need to start representing errors or diverging paths, you probably end up needing multiple diagrams, and if you need multiple diagrams, then how do you know if you have enough diagrams to fully specify the functionality?

> The protocol is complicated enough for me to think that the diagrams are the source of truth of the protocol. In other words, I'd venture to say that if an implementation of the Double Rachet algorithm ever does something that doesn't match the diagrams, it is more likely it is the code that is wrong than vice-versa.

I would believe the latter statement, but I wouldn't say the first statement is the same thing in other words, so I don't believe this is the correct conclusion.

My conclusion would be that diagrams are a great way to visualize the truth of the protocol, but they are not a good way to be the source of truth: they should be generated from a more versatile (and formal) source of truth.



State diagrams are basically visual code, aren't they?

And indeed they are good for specifying, for being the source of truth, but like code, they (afaik) don't really work for representing interactions with multiple actors (other than by sending/receiving messages), and they don't have a time component. But you could generate sequence diagrams from them, or at least verify them.

Xstate does have some functionality for interacting with the specified state machine, but I haven't played with it a lot. The idea of generating—or at least verifying—Xstate state machines with TLA+ has come across my mind, though.



Statecharts are highly useful to represent behaviour. Sequence diagrams do not capture it as much.

Timed behaviour, like timeouts can be represented in statecharts by having transitions on time based conditions. For example an event puts the system in a 'waiting' state, and in the waiting state there is a 30 second transition to a 'fail' state unless some other event happens which pulls the system out of the 'waiting' state.

Also it provides a good indication of what behaviour is valid, what is not valid, and what the don't-cares are.

External interactions of a system can be modeled as state changes of the system.

They also have 'memory', special elements that remember which substate the system was in before it jumped out of the state last time.

I recommend David Harel's very interesting paper on modelling behaviour with state charts.
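As a rough illustration of the timed transition described above, a minimal hand-rolled Python sketch (not from any statechart library; names are illustrative):

    import time

    class Statechart:
        def __init__(self):
            self.state = "idle"
            self.deadline = None

        def on_event(self, event: str) -> None:
            if self.state == "idle" and event == "request":
                self.state = "waiting"
                self.deadline = time.monotonic() + 30.0  # arm the timed transition
            elif self.state == "waiting" and event == "response":
                self.state = "done"  # this event pulls us out of 'waiting'
                self.deadline = None

        def tick(self) -> None:
            # Called periodically; fires the time-based transition to 'fail'.
            if self.state == "waiting" and time.monotonic() >= self.deadline:
                self.state = "fail"
                self.deadline = None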



Virtually everything safety critical (cars, planes, biomedical..) uses Simulink which is not shown or mentioned by this post and it works fine for very large apps.



Scratch seems to be reasonably successful in teaching kids to code [1].

But a large visual blocks program is as incomprehensible, if not more, than a pure textual representation.

Whether text or visual, the challenge for IDEs is the ability to hide detail of large codebases in a way that still preserves the essential logic and allows modifying it. Folding/unfolding code blocks is the primary available tool, but it's only a primitive way to reduce visual clutter, not a new visual abstraction that can stand on its own.

[1] https://scratch.mit.edu/projects/editor/?tutorial=all



I think scratch with a little more structure and lots of keyboard shortcuts would work for a "real" language.

It's really just replacing indentation with blocks of color.



Twenty years ago I was a researcher (Fraunhofer) on executable UML, especially on aspect-oriented programming (AOP, which was a thing back then but never caught on). You could draw a boundary around some UML process flow, and attach an aspect to it. For example a security boundary, and then the generated code would automatically add a security-check aspect for all flows going inside.

What we found out is that text is just better to read and understand. It's easier to refactor and much denser. We experimented with different levels of zooming in and out for bigger programs, but visual programming does not scale (or at least didn't back then).



That was the premise of UML and the dream Rational was trying to sell with Rational Rose – that in the future, there would be no conventional programming languages, no software engineers, only architects, philosophers and visionaries wearing suits and ties, daydreaming and smoking pipes, who would be imbued with senses of self-importance and self-aggrandisement using Rational Rose and its visual language (UML) for system design and actually for every.single.thing., and Rational Rose would automatically generate the implementation (in an invisible intermediate conventional programming language as a byproduct). The idea was to obliterate the whole notion of programming as we know it today.

So the implementation in the intermediate programming language (C++) was not even meant to be readable to humans – by design. Rational Rose (the app), however, was too fat, too slow and (most importantly) buggy AF – to the point that the implementation it spat out never actually worked. And UML did not meet the level of enthusiastic support Booch and Co wholeheartedly hoped for.

Whatever the reason was for Grady Booch's personal crusade against programming and the attempt to replace it with visual programming, it has failed, and failed miserably. Today, the only living remnant and legacy is UML sequence diagrams, and even class diagrams are no longer seen in the wild.



You seem to have come to it from the wrong direction. The entire idea behind the Rational Process was that in the future, your architects would have to come down from their conference rooms every 2 months or so and talk to the *gasp* developers.

IBM had quite a hard time selling this idea. So they decided to push their marketing to people outside of their target customers. That may be how they got to you.



The kinds of visualisation discussed by the article remind me very strongly of Glamorous Toolkit [0], most recently posted to HN at [1]. It’s something I’ve never really felt a need for, mostly because the software I work on is mostly very small in terms of code size (for physics research etc.). The idea is certainly fascinating, however… there are a lot of possibilities for a codebase which can introspect itself.

[0] https://gtoolkit.com/

[1] https://news.ycombinator.com/item?id=33267518



Merging source code line by line is a solved problem. Merging visual code/graphs/graphics is often simply impossible. Also versioning and simply showing diffs become difficult problems with visual programming. That is why visual programming will never scale beyond small toy projects maintained by a single developer.

That said, I agree that visualising your code base might give additional insights. However that is not visual programming, that is code visualisation.
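To see how solved the text case is, a few lines of Python's standard difflib produce a reviewable, line-mergeable diff; there is no comparably canonical operation for node graphs:

    import difflib

    old = ["def area(r):", "    return 3.14 * r * r"]
    new = ["import math", "", "def area(r):", "    return math.pi * r ** 2"]

    # Prints a unified diff, the same format version control tools merge on.
    for line in difflib.unified_diff(old, new, "a/area.py", "b/area.py", lineterm=""):
        print(line)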



Maybe it's not impossible but just quite difficult? I use Houdini 'Vops' sometimes and I could imagine a tricked-up diff could be made for it (especially since it translates to vex) but you're certainly right that it's a hard problem in general!



Well visual programming is standard in Unreal projects and they definitely scale beyond toy projects with a single developer. Although Excel is the most popular visual 'programming language', the second most popular is surely Blueprint.



"if you connect to source control within the editor you can at least diff blueprints to compare changes. though it's currently not possible to actually merge them."

https://www.reddit.com/r/unrealengine/comments/1azcww8/how_d...

So it seems like basic functionality like merge is still missing from visual coding in Unreal.

But yes, there were also huge projects before the invention of distributed version control systems. But that wasn't a good world and why go back?

P.S.: Have you ever tried to merge two different excel files?



Yes, but you don't need merge to scale up things like game projects. You can just carefully partition the work between people. Perforce supports file locking for this reason. And, a lot of merge conflicts in software are thanks to the use of symbolic naming.



This is very true. Line merges do not always work well. There used to be a tool called SemanticMerge which was able to handle merging code cleanly even when the code had been refactored. It saved me quite a bit of work a handful of times (before it was taken away because the company needed a value add for their paid version control software).



The fundamental problem in visual programming is that it limits you to geometry (practically to 2D Euclidean space). Most non-trivial programming problems are spaghetti by nature in such spaces.



That is not a problem, and for sure not a fundamental one. The textual representation is very limited; it's actually 1D, with line breaks helping us read it. 2D gives a lot more possibilities for organising code, similar to how we draw diagrams on a whiteboard.



The problem with visual programming is it abandons the fundamental principle of language, whereby to connect two objects it is necessary only to speak their names, in favor of the principle of physicality, whereby to connect two objects it is necessary that they be in physical contact, ie. to be joined by a wire.



> only to speak their names

> in physical contact, ie. to be joined by a wire.

I don't really see how that is different; in any given language the name alone is not enough to refer to the object, and in the general case you have to import it. For me the process of name resolution and connecting by a wire is the same thing with different representations.



> to connect two objects it is necessary that they be in physical contact

I can imagine a way to connect an object to another by selecting the latter's name from a drop-down menu of defined objects. A visual equivalent of a function call.



The power of text or other symbols is that they aren't spatially bounded. That's why it works even in "1D".

There are probably some possible usability gains from adding dimensions. E.g. node-type "programming" in Blender is quite nice. But for general purpose programming it's hard to see how we'd get rid of symbolic representation.



Specifically, textual programs use symbols to build a graph of references between computations, where the average visual language tries to use explicit lines between blocks. But the reference graphs of non-trivial programs are often decidedly non-planar, which only becomes a problem when you try to lay them out on a plane.
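A small sketch of that claim using networkx (assuming it is installed): even a tiny, fully connected reference graph cannot be embedded in the plane without crossings:

    import networkx as nx

    # Five functions that all reference each other form the complete graph K5.
    calls = nx.complete_graph(5)
    is_planar, _ = nx.check_planarity(calls)
    print(is_planar)  # False: no 2D layout avoids crossing "wires"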



Why does laying out code on a line not cause a problem with spatial reasoning but a plane would? Are we somehow incapable of applying spatial abstractions when we move up into a higher dimension than 1?



The spatial reasoning we do when reading code does not happen in the dimensions of the literal text, at least not only in those. It happens in how we interpret the code and build relations in our minds while doing so. So I think the problem is not the spatial reasoning of what we literally see per se, but whether the specific representation helps with something.

I like visual representations for the explanatory value they can offer, but if one tries to work rigorously on a kind of spatial algebra of these, then this explanatory power can be lost past some point of complexity. I guess there may be contexts where a visual language works well, but in the contexts I have encountered I have not found them helpful. If anything, the more complex a problem is, the more cluttered the visual language form ends up being, and it feels like it overloads my visual memory.

I do not think this is a geometric feature or advantage per se, but a matter of how the brains of some people work. I like visual representations and I am in general a quite visual thinker, but I do not want to see all the minuscule details in there; I want them to represent what I want to understand. Text, on the other hand, serves better as a form of (human-related) compression of information, imo, which makes it better for working on those details.



> If anything, the more complex a problem is, the more cluttered the visual language form ends up being, and feels overloading my visual memory

I feel like you are more concerned about implementation than the idea itself. For me it's the opposite: I find it's easier to understand small pieces of text, but making sense of hundreds of 1k-line files is super hard.

Visual programming in my understanding should allow us to "zoom" in and out on any level and have a digestible overview of the system.

Here is an example of visual-first platform that I know is used for large industrial systems, and it allows viewing different flows separately and zooming into details of any specific piece of logic, I think it's a good example of how visual programming can be: https://youtu.be/CTZeKQ1ypPI?si=DX3bQSiDLew5wvqF&t=953



As jampekka put it, text isn't trying to use spatial abstractions, it's using the (arguably more powerful) abstraction of named values/computations. Hard to think about? Yes, there's a learning curve to say the least. But it seems to be worth it for a lot of cases.



Text doesn't use spatial abstractions.

The problem with spatializing complex relationships becomes very apparent when one tries to lay out graphs (as in nodes-and-edges) graphically. Unless the relationships are somehow greatly restricted (e.g. a family tree), the layouts become a total mess, as the connectedness of the nodes can't be mapped to distances and the edges by necessity have to make a lot of crossings on top of each other.



Writing is based on speech, which is one-dimensional. Most programming is actually already highly two-dimensional thanks to its heavy line orientation.

But most visual programming isn't trying to be a kind of "2D orthography" for language, it is trying to be a "picture" of a physical mechanism.



That 2D orthography idea is my pipe dream. Any time I am writing several similar lines of code with variables of different lengths, I always want my IDE to acknowledge that some one-symbol operators would look so nice aligned in one vertical line.



So why not do 2D visual programming with access to symbols that are not spatially bound? Is there any reason why a 2D plane forces the programmer to think in terms of a plane that doesn't also apply to a 1D line of text?

It seems to me that reducing the frequency with which programmers have to drop into non-spatial symbols would be beneficial even if there are still use cases where they have to.



With text/symbolic representation I can describe any number of dimensions in a super dense way. Physicists and mathematicians do exactly that, and software devs as well, because most software is multidimensional.

You do have graphs in mathematics, but all the maths I see is about describing reality in a really dense couple of symbols, compressing as much of the universe as possible into something like E=mc^2.

Graphical programming representations go the other way: they try to use more bits to describe something that can be described in fewer bits - many more bits.



Mapping to a plane doesn't help you understand how state changes occur over time, or what the overall state of the state machine is.

The only time I've seen visual programming work is when the state is immutable. However, it requires a major paradigm shift in how one designs, develops, and tests their programs.



it's pretty basic topology - embedding versus immersion. you cannot embed anything but the simplest software in a 2d plane. you end up having to endlessly try to structure things to minimize line crossings, make multiple diagrams of the same code to capture different aspects of it, or otherwise perform perversions just so it fits on a page.

And I lived through this. Most of my early career was DoD-related in the early '90s, when things like functional analysis were all the rage. Endless pages of circles and lines which were more confusing than helpful, and certainly didn't actually capture what the software had to do. Never again.



I agree with you; however, the elephant in the room is that an image or topology doesn't predicate anything (a predicate being a property that a subject has or is characterized by). That is the main delineation between them, and why SPO (subject-predicate-object) is used universally by all modern languages - albeit some have different SVO orderings, but I digress.

The next major drawback of visual programming is that it doesn't explicitly convey time. You have to infer it via lines or the sequence of diagram blocks. Whereas in a programming language you have a sequential order of execution, e.g. left to right, top to bottom, of the program flow and state changes. If you attempt to achieve the same with a stateless event-driven bus or message queue, you end up having to embed the sequential control flow into the event payload itself.



I would venture a different take: visual programming makes quite clear the mess of programs some people create when they don't follow modular programming.

Complex flows can be packaged into function and module representations, instead of dumping everything onto a single screen.



The spatial (usually largely 2D in IC) constraints are a huge limitation for circuit design. I'm quite sure chips (or breadboards) wouldn't be designed like this if the physical world didn't force the geometry.



I meant more that the very concept of an IC is a good idea, and like a good abstraction in programming.

I think what pjmlp was getting at is that when using visual programming, a lot of people seem to turn off (or not cultivate) the part of the thought process concerned with creating good abstractions, despite it at least being possible to do so.



And yet IDA and Ghidra use that same 2D representation structure for basic blocks (e.g. https://byte.how/images/ghidra-overview/graph-edges.png ), showing code flow between the blocks.

I have had better-than-average success representing the high level sequence of computer-y actions using sequence diagrams, and suspect strongly my audience would not have the same comprehension if I used pseudocode or python or C++

Where I think the anti-visual programming audience and I can agree is the idea of a standard library, since a bunch of diagrams showing a "length" message being sent to a String object is for sure the wrong level of abstraction. In that way, I'd guess the DSL crowd would pipe up and say that is the same problem a DSL is trying to solve: express the code flow in terms of business nouns and verbs, and let the underlying framework deal with String.length nonsense

I've seen a hybrid approach to this in a few testing frameworks, such as Robot Framework <https://robotframework.org/robotframework/latest/RobotFramew...>



The reason it works out so well (contrary to many people's intuition) is that most programming is done in structured languages or in a structured-style these days. This significantly reduces the number of entry points (typically down to 1) for any block of code, though it can still have many exit points. Unless someone abuses something like Duff's device or uses goto's to jump into a block (and the language permits it), the flow diagrams like in the linked image end up being pretty tidy in practice.



I got the Apple Vision Pro with the hope to tinker with such things. Is one more dimension enough to "unlock" visual programming? I don't know, and unfortunately not many seem interested in exploring it.



I don't think extra dimensions help. Even simple functions have easily tens of interconnected references and representing these spatially is gonna be a mess even in higher dimensions.



I personally won't ever be interested in VR until it has "generic computing" as a major feature.

Like automatically creating a 3D world showing pipes as your internet connections, and some kind of switches and buttons and things for every single thing you can do with your computer, including complicated-ass command lines and GUI windows.

And all the tools necessary to reduce or increase the complexity of it as I see fit as a user.



> Visual languages trade named links for global wiring

Existing visual programming langs can definitely do "named links". A lot support named function-like blocks which are another form of avoiding wires.

> which is very cluttered for serious problem solving

This clutter is also problematic in textual programming, and is the reason abstractions and programming structures are used. Perhaps the hint here is that we need better ways of representing abstraction in visual programming.



Mathematicians like to use parametrizations to measure "how many dimensions" something has. If you need two indexes to traverse it (x and y), it's 2D; if a single index works best to describe it, it's 1D.

Another way to think of it is "there is semantic meaning to 'the character to the right/left of this one', but is there to 'the character above/below this one'?" In most programming languages, there isn't at all.



How is `if` related to creating a new line? And how does a new line make something 2D? If code were 2D you could write code anywhere in your document without juggling spaces and newlines.



You could argue it's 1d, actually, since sequence is fundamental, not positioning on the x axis.

At any rate it's (mostly+) categorically different from what visual programming attempts. Code must be read, comprehended, and a mental model built. Visual programming is designed to give a gestalt spatial intuition for code structure -- a different kind of comprehension.

+Indent and spacing between functions/methods does count as a tiny bit of visual programming IMO



I think we need functional visual programming.

It seems to me like referential transparency and pure functional composition would be a much cleaner way to visually compose functions into larger functions (and eventually programs).
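As a sketch of what that might mean: pure-function composition in Python already has exactly the box-and-wire structure a dataflow editor would draw, with referential transparency making each "box" safe to reuse (names here are illustrative):

    from functools import reduce
    from typing import Callable

    def compose(*fns: Callable) -> Callable:
        # Left-to-right composition: compose(f, g)(x) == g(f(x)).
        return lambda x: reduce(lambda acc, f: f(acc), fns, x)

    normalize = compose(str.strip, str.lower)
    slugify = compose(normalize, lambda s: s.replace(" ", "-"))

    print(slugify("  Visual Programming  "))  # visual-programming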



In banking, Camunda is incredibly popular.

You model state changes visually. The model - the diagram with boxes and arrows - IS the code. And then the boxes can have additional code logic in them.

It's a giant pain to work in and debug. But the execs love it because they want to see the diagrams.



I have another take on visual programming

We need programming environments that can understand both textual & visual code.

We need a new primitive which I call the visual-ast

Encode the AST in HTML via data attributes, and have a system that ignores irrelevant HTML nodes, giving space for rich UIs to be developed in place of typical AST nodes.

eg.

  // textual
  1 + 2
  
  // ast
  {
    kind: "plus",
    lhs: { kind: "int", value: 1 },
    rhs: { kind: "int", value: 2 }
  }
  
  // visual-ast (HTML with data attributes)
  <span data-kind="plus">
    <span data-kind="int" data-value="1">1</span>
    +
    <span data-kind="int" data-value="2">2</span>
  </span>
What you can do with this AST is create rich UIs which contain the appropriate `data-attr`s (ignoring the other elements), and now you have a generic system for interweaving textual & visual code.
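A minimal Python sketch of what emitting such a visual AST could look like (an illustration of the idea, not the commenter's actual system):

    def to_visual_ast(node: dict) -> str:
        # Render an AST node to HTML whose data attributes mirror the AST,
        # so a tool can parse the code back out of the markup.
        if node["kind"] == "int":
            return f'<span data-kind="int" data-value="{node["value"]}">{node["value"]}</span>'
        if node["kind"] == "plus":
            return (f'<span data-kind="plus">{to_visual_ast(node["lhs"])}'
                    f' + {to_visual_ast(node["rhs"])}</span>')
        raise ValueError(f"unknown node kind: {node['kind']}")

    ast = {"kind": "plus",
           "lhs": {"kind": "int", "value": 1},
           "rhs": {"kind": "int", "value": 2}}
    print(to_visual_ast(ast))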


The problem with most visual programming is that most platforms avoid making any tradeoffs.

A good visual programming tool should abstract away complexity but it can only achieve that by reducing flexibility.

If you're going to give people a visual tool that is as complex as code itself, people might as well learn to code.

It helps to focus on a set of use cases and abstract away common, complicated, error-prone, critical functionality such as authentication, access control, filtering, schema definition and validation. All this stuff can be greatly simplified with a restrictive UI which simultaneously does the job and prevents people from shooting themselves in the foot.

You need to weed out unnecessary complexity; give people exactly the right amount of rope to achieve a certain set of possible goals, but not enough rope for them to hang themselves.

I've been working towards this with https://saasufy.com/

I've chosen to focus on CRUD apps. The goal is to push CRUD to its absolute maximum with auth, access control and real time data sync.

So far it's at a point that you can build complex apps using only HTML tags. Next phase would be to support generating and editing the HTML tags via a friendly drag and drop UI.

Still, it's for building front ends. It cannot and will never aim to be used to build stuff like data processing pipelines or for analytics. You'll need to run it alongside other services to get that functionality.



My 2 €cents from a limited and outdated experience with visual programming tools:

1. Screens have limited size and resolution, and the limits get hit rather fast. The problem can be pushed away by zooming, by maybe an order of magnitude, but for a long living project growing in size and complexity, it will not be enough.

2. In text, near everything is just a grep (fzf, ...) away, with the power of regex if needed. Do the no-code folks nowadays implement an equally powerful search functionality? I had a very bad experience with this (see the sketch after this list).

3. Debugging: although the limited possibilities for plugging graphical items together are like an enhanced, strict type safety, I'm sure that errors somehow happen. How is debugging implemented in the visual tools?

4. To store/restore the visual model, the tool developer needs to develop a binary/textual/SQL/... representation as the unique source of truth for it. I think the step from that to a good textual DSL is smaller than to a GUI. And the user can more or less effortlessly use all the powerful tools already developed for shells, IDEs, editors, ....

So in my opinion most of the visual programming things are wasted time and wasted effort.
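To make point 2 concrete: the kind of search the no-code tools struggled with is a few lines of Python over plain text (the symbol name here is hypothetical):

    import pathlib
    import re

    pattern = re.compile(r"\bparse_config\s*\(")  # find every call site
    for path in pathlib.Path(".").rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if pattern.search(line):
                print(f"{path}:{lineno}: {line.strip()}")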



Extending #2, we've developed incredibly flexible and powerful tools for editing plain text. I've found refactoring to be a breeze with Vim macros, and people swear by Sublime's multi-cursor editing. Even with a good set of hotkeys, I can't imagine a visual environment being as smooth to edit.



There are areas it's good for: beaten paths, modeling time-independent structures, and things that are naturally 2D. Not so great for the final solution, but handy when you need to do quick iterations. E.g. the interface builder in Xcode, the node system in Blender, sound synthesis…



I'm working on visual programming for Python. I created a Python editor that is notebook-based (similar to Jupyter), but each code cell in the notebook has a graphical user interface. In this GUI you can select a code recipe, a simple code step; for example, here is a recipe to list files in a directory: https://mljar.com/docs/python-list-files-in-directory/ - you fill in the UI and the code is generated. You execute code cells in a top-to-bottom manner. In this approach you can click Python code together. If you can't find a UI with the recipe you need, you can ask the AI assistant (running Llama 3 with Ollama) or write custom Python code. The app is called MLJAR Studio and it is a desktop application, so all computations run on your machine. You can read more on my website https://mljar.com


As a software developer, this article made sense to me, although I would want it to include a few more useful UML diagrams. Models is the keyword here, not "visual".
    User Feature -> Feature Model -> Architecture Model -> Source Code
Speaking from a software-analyst perspective, models are used throughout. Many complex projects need a model of functionality to bridge understanding between stakeholders and developers regarding the (agreed upon) required features in a given problem domain. The resulting models and code should stay in sync.

Some buzzwords to google:

- Business Process Modeling and Notation (BPMN)

- Model Driven Architecture (MDA)

- Model Based System Engineering (MBSE)

In theory, the developer output is a function of the desired functionality. If the functionality fits a parsable model, we should be able to transcode it into source code. This can be a result of adopting MDA and/or MBSE.

In a nutshell, I believe software development should start from models that "generate" code, which can then be augmented by software developers. Updates to a model should result in updated code.
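As a toy sketch of that model-to-code step (the model format and the generated class are invented for illustration):

    # A parsable feature model...
    FEATURE_MODEL = {
        "entity": "Invoice",
        "fields": [("number", "str"), ("amount", "float"), ("paid", "bool")],
    }

    # ...transcoded into source code that developers can then augment.
    def generate_entity(model: dict) -> str:
        lines = [f"class {model['entity']}:"]
        params = ", ".join(f"{name}: {typ}" for name, typ in model["fields"])
        lines.append(f"    def __init__(self, {params}):")
        for name, _ in model["fields"]:
            lines.append(f"        self.{name} = {name}")
        return "\n".join(lines)

    print(generate_entity(FEATURE_MODEL))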



The article mentions a couple of what I think are relevant examples: state machine diagrams and swimlane diagrams. The author makes a great point in the beginning, how programmers don't need to visualize iterator or branch logic code.

Language structures are what they are, we all learn them and know them; they're the tools we're familiar with and don't need a diagram for. What changes all the time (and what made the swimlane and machine diagrams relevant) is the business logic. This is the part that continues to evolve, that is badly communicated or incompletely specified most of the time, and that is the part most in need of increased visibility.

In my experience, this relates closely to what's really important in software development -- important to those who pay the software developers, not to the developers themselves.

I've seen lots of architecture diagrams that focus on the pieces of technology -- a service here, a data bucket there, etc etc. I think that reflects the technical person's affinity for and focus on tools and building blocks, but it puts the primary motivations second. To me, the key drivers are the "business" needs - why do we need the software to do the things, who will use it, and how.

In my work, I try to diagram the workflows -- the initial inputs, the final product, and the sequence of tasks (each with some intermediate ins and outs) in between, noting which roles / user personas execute them. A kind of high-level UML diagram with both structural and behavioural elements. I find that it raises key questions very early on, and makes it easier to then continue laying down layers of increasing technical detail.

If I were to design a visual language, this is where I would start - formalizing and giving structure to the key concerns that motivate and inform software design, architecture and development.



Problem is that “those who pay developers” don’t care to do it on their own. Heck, a bunch of business analysts don’t care about going down into the gritty details - so even if you standardize stuff, it won’t shorten the loop.

The only thing it will do is rob developers of the flexibility and level of control with which they can fix up any “management business grand plan”. Just like all those low-code platforms do.

For me, low-code and visual programming platforms are the same - good ideas for someone who doesn’t understand technical details.



"Language structures are what they are, we all learn them and know them; they're the tools we're familiar with and don't need a diagram for"

If I have a nested construct of various control flow together with some ternary operators, I do wish for something more visual. Or when trapped in parenthesis hell. Yes, I can read that. But it takes energy to decode it.

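A hypothetical example of the kind of nesting meant here; both functions are equivalent, but the first takes real energy to decode:

    def fee(age, member, coupon):
        # Dense nested ternaries: correct, but expensive to read.
        return 0 if age < 5 else (5 if member else (8 if coupon else 10)) if age < 65 else 6

    def fee_readable(age, member, coupon):
        # The same logic, unrolled.
        if age < 5:
            return 0
        if age >= 65:
            return 6
        if member:
            return 5
        return 8 if coupon else 10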



So I don’t see a problem with just doing a quick rewrite of the code to make it cleaner.

With Git you can commit it locally and never publish it, so as not to offend teammates :). With an IDE I can reformat text and refactor it in a matter of seconds. You can rewrite it just enough to understand it.

For graphical representations there are no tools that can help you like that, and the graphical representation will most likely only be worse.
