Ask HN: What are some unpopular technologies you wish people knew more about?

Original link: https://news.ycombinator.com/item?id=38499134

Though not stated outright, vidir and qmv are hinted at for use on OpenBSD. While moreutils has an OpenBSD port, renameutils unfortunately does not yet, which means users must compile it themselves, an extra hurdle for anyone unfamiliar with the process. Still, both tools are highlighted here for anyone seeking a general-purpose utility suite or an alternative to elaborate Emacs-based renaming schemes: they offer an efficient way to bulk-rename files with customizable naming conventions, without heavy Emacs editing or complex scripting.

Another option for bulk renaming is "prename", available in most Linux distributions as well as Arch Linux (the AUR package is called "prename"). prename lets you apply complex naming conventions when renaming files, but its interface is console-based: rather than opening an editor the way vidir and qmv do, you describe the rename with a Perl expression, much as you would with sed. Combined with find, it also handles recursive, batch renames. On Debian systems, prename is available from the standard repositories. For example, to rename every ".jpg" file in "/path/to/directory" according to a "foo_..._bar" pattern, a shell session might look something like this (a sketch using prename's Perl-expression interface; -n previews without renaming):

    $ cd /path/to/directory
    $ prename -n 's/^(.*)\.jpg$/foo_$1_bar.jpg/' *.jpg   # dry run: show planned renames
    $ prename 's/^(.*)\.jpg$/foo_$1_bar.jpg/' *.jpg      # apply them

After these steps, the ".jpg" files in "/path/to/directory" will have been renamed according to the specified pattern.

Ask HN: What are some unpopular technologies you wish people knew more about?
270 points by aredirect 1 day ago | 374 comments
Couple of things I like

- tarantool https://www.tarantool.io/en/

- rebol/red-lang https://www.red-lang.org/

- U++ : https://www.ultimatepp.org/

- lazarus: https://www.lazarus-ide.org/

- fasm: https://flatassembler.net/



Analog computation.

I don't mean just vacuum tubes or even electronics at all. Mechanical analog computing is insane when you get down to it. You have special shapes that move against each other and do calculus.

We make these mechanical models as analogs of more complex physical systems. We can turn huge calculations into relatively simple machines. That we can roll two weirdly shaped gears together and get an integral out says to me something very profound about the universe. I find it to be one of the most beautiful concepts in all of the sciences.

What's even more wild is that we can take those mechanical analogs of physical systems and build an electronic analog out of vacuum tubes. That vacuum tubes work at all is just completely insane, but it's some absolutely beautiful physics.

And yes, there are equally beautiful problems that can only be solved in the digital domain, but it just doesn't speak to me in the same way. The closest thing is the bitwise black magic like fast inverse square root from a special constant and some arithmetic. Besides, that's more a property of number systems than it is of digital computation.

I understand how and why digital took over, but I can't help but feel like we lost something profound in abandoning analog.



Analog tide-predicting machines are fascinating.

The tide height is a function of the earth/sun/moon system. Earth and Moon aren't at a fixed distance from each other, and neither is the sun, so every day's tide is unique, but you can predict the range.

The analog way to do it is to make a gear for each component of the system, synchronize them, and use them all to drive one final gear, which shows the prediction for the time you've chosen.
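
To make the mechanism concrete, here is a sketch in Python of the sum that gear train computes: one harmonic term per gear, all driving a final summing output. The constituent periods below are the real M2/S2/O1 tidal periods; the amplitudes and phases are made up for illustration.

    import math

    # (amplitude m, period h, phase rad), one triple per "gear"
    constituents = [
        (1.20, 12.42, 0.0),   # M2: principal lunar semidiurnal
        (0.55, 12.00, 1.1),   # S2: principal solar semidiurnal
        (0.30, 25.82, 2.3),   # O1: lunar diurnal
    ]

    def tide_height(t_hours):
        # the "final gear": sum every constituent's contribution at time t
        return sum(a * math.cos(2 * math.pi * t_hours / period + phase)
                   for a, period, phase in constituents)

    print(f"predicted height at t=6h: {tide_height(6.0):+.2f} m")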



I used to know nothing about Lord Kelvin except that he said things like "It seems, therefore, on the whole most probable that the sun has not illuminated the earth for 100,000,000 years, and almost certain that he has not done so for 500,000,000 years"[1] and allegedly "everything which can be discovered, has been discovered". Then last year's Veritasium video on YouTube[2] about analog computers taught me that he invented tide-predicting analog computers to "substitute brass for brains", adding sinusoidal curves mechanically, with a mechanical integrator to separate individual sinusoidal frequencies back out of the sum.

[1] https://zapatopi.net/kelvin/papers/on_the_age_of_the_suns_he...

[2] https://www.youtube.com/watch?v=IgF3OX8nT0w from about 3 minutes.



I think this is a great example. It's pretty easy to mentally connect the orbit of a planet with the ratio of gears.

Once you really understand how these systems are an analog of a physical problem, everything makes so much more sense



What are some... surprising and relatively easy ways to feel the magic of analog computation?


In the ML/AI space, at least some people think analog computing is the next big thing: https://www.forbes.com/sites/karlfreund/2021/09/23/ibm-resea.... It hasn't taken off yet, though.


I know, I've had my eye on this topic for a while.

Honestly it seems like a perfect application. Neural networks are analog systems. An analog computer can represent neurons very accurately and the entire network is inherently parallel, for free!

I can't wait to see what comes out of this research



My research was in this direction. We already know that these analog neural chips could be orders of magnitude faster than digital equivalents. There has also been a lot of military research in this area over the past few decades. However, architecture innovations move much faster at the software level, and dedicated hardware approaches have not been able to catch up. Once things slow down at the software level, hardware LLMs could slowly become the norm.


I don't venture too far into analog electronics so I don't know for sure, but it seems like an analog FPGA type system could really take off.


Cypress has been doing something like that for a long time now. Reconfigurable Analog electronics is an interesting field.


I agree.


Yes!

Also, most people don’t know that the word „Analog”, as in „analog circuits” comes from „analogy”.



I didn't make that connection until my late 20s and when I finally did, it radically changed how I look at and understand analog systems.

In today's world, we still build analogs, we just coerce them into strictly numerical, digital models. I don't know if you can call it better or worse, but digital is definitely less magical and wondrous than mechanical analog systems.



Veritasium did a very nice video about the analog tide computers: https://www.youtube.com/watch?v=IgF3OX8nT0w


Nature, almost completely analog, has been around a thousand times longer than humans. How many times has 'evolution' used digital methods to accomplish something? Perhaps we've chosen to switch to digital because we're in a hurry and it's easier ... in hopes of, some day, asymptotically approaching the advantages of analog.


The main reason is that digital computers are so incredibly, overwhelmingly more flexible than analog. Analog computers are (generally) bespoke single-purpose devices. It really isn't too far off to imagine analog computers as code made physical, with all that entails.

Imagine writing a program if every time you wanted to change something you had to cut a new gear, or design a new mechanism, or build a new circuit. Imagine the sheer complexity of debugging a system if instead of inspecting memory, you have to disassemble the machine and inspect the exact rotation of hundreds of gears.

Analog computing truthfully doesn't have enough advantages to outweigh the advantage of digital: you have one truly universal machine that can perform any conceivable computation with nothing but pure information as input. Your application is a bunch of binary information instead of a delicate machine weighing tens to hundreds of pounds.

Analog computing is just too impractical for too little benefit. The extra precision and speed is almost never enough to be worth the exorbitant cost and complexity.



DNA is digital. I think the crucial digital feature is the ability to get an exact result from imperfect components, which is especially important for self-replicating systems. Instead of a calculation that is always off by 1%, you can have a perfect result 99% of the time. And you can improve MTBF by stacking error correction on top, without necessarily having to improve manufacturing tolerances.
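
A back-of-envelope sketch of that trade-off, assuming independent errors and the simplest scheme imaginable (triplicate a bit and take a majority vote):

    # each copy is wrong with probability p; the vote only fails when
    # at least two of the three copies are wrong at once
    p = 0.01
    fail = 3 * p**2 * (1 - p) + p**3
    print(f"single copy error: {p:.2%}, majority-of-3 error: {fail:.4%}")
    # -> single copy error: 1.00%, majority-of-3 error: 0.0298%

Two orders of magnitude better, from the same imperfect parts, and you can stack further layers on top.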


You can have a look at that pinball machine. You will be amazed. https://youtu.be/ue-1JoJQaEg


Electronic analog computing is also still being researched, eg https://arxiv.org/abs/2006.13177 ("Analog multiplication is carried out in the synapse circuits, while the results are accumulated on the neurons' membrane capacitors. Designed as an analog, in-memory computing device, it promises high energy efficiency")


I agree. I remember climbing into the turret of the USS Massachusetts and playing with the ranging computer. It was just impressive that a geared device could do pretty complicated math in real time.


Like Soviet water computers?


Look around on YouTube. There are some fascinating videos from the 1950s on US Navy mechanical fire control computers.

These machines can calculate ballistic trajectories with incredible accuracy, accounting for the relative motion of the ships, wind speed, and even the curvature of the earth. Those calculations are not at all trivial!
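
To get a toy feel for why, consider just the lead-angle part of the problem: you must aim where the target will be, but where it will be depends on the shell's flight time, which depends on where you aim. The machines solved that implicit equation continuously; here is a sketch in Python (all numbers made up):

    import math

    shell_speed = 800.0            # m/s, hypothetical muzzle velocity
    target = (12000.0, 4000.0)     # m, target position right now
    target_vel = (-15.0, 8.0)      # m/s, the target's own motion

    # fixed-point iteration: guess a flight time, aim at where the target
    # will be by then, recompute the flight time to that point, repeat
    t = 0.0
    for _ in range(20):
        aim = (target[0] + target_vel[0] * t, target[1] + target_vel[1] * t)
        t = math.hypot(aim[0], aim[1]) / shell_speed

    print(f"lead time {t:.2f} s, aim point ({aim[0]:.0f}, {aim[1]:.0f}) m")

And that's before ballistics, wind, and the motion of your own ship enter the picture.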



Yes. They are like slide rules on steroids.


Like naval mechanical targeting computers that use clever cam and gear arrangements.

https://youtu.be/s1i-dnAH9Y4?si=oHHJGRqnFx-ydQu1



Hah, this is the exact video I was referring to in the sibling comment. This is what really captured my imagination with regard to mechanical computers


NNCP, from http://www.nncpgo.org/

It's a protocol/tool for async file transfer, built for disconnected/intermittent connectivity amongst known parties (trusted friends as p2p), allowing even for sneakernet-based file transfer.

It started as a modern take on Usenet, but it boggles my mind how cool it is:

Want to send a TV series to your friend? Send it via nncp, and it will make it through either via link-based file transfer (when the connection allows: pull or push, cronjob, etc.), or even via sneakernet if there is "someone going that way".

The comms priority system lets you do high-priority message-checking over an expensive network link while deferring bulk file transfer to trunk lines later.

It can even be configured to run arbitrary commands on message receive, to allow indexing/processing of files (like a ZFS-receive hook, mail/matrix ingestion...)

See all the usecases: http://www.nncpgo.org/Use-cases.html

As with many of these cool techs, I just wish I had a good reason to use it =D



Before its time.


What if its time is now?


If it was, it would be - but it’s not.


But it could be... if you believe


In case you don't fancy visiting all the links:

- Tarantool is some sort of in-memory DB with optional persistence

- Red is a programming language that has made the odd syntax decision to use {} for strings and [] to define scopes

- U++ is one of those all-encompassing C++ frameworks like Qt

- Lazarus is a Pascal(?) IDE

- And FASM ("flat assembler") is a fast x86/x86-64 assembler

I'm struggling to find the common thread across these links, apart from the OP probably being an enthusiast of obscure programming languages



Red is the spiritual successor to Rebol, from which its syntax comes.

http://www.rebol.com/

https://en.wikipedia.org/wiki/REBOL



Lazarus is an Object Pascal IDE modeled after the old versions of Delphi.


Thank you for summing it up. The common thread is that these are things that, from a developer's perspective, I feel should be checked out (not just programming languages).


I guess I'm looking for a hint as to why this selection of items is particularly interesting to you. These cover a pretty wide spread of topics, and for folks who aren't well versed in each topic, they might be better served by evaluating the standard option in that field (Redis, Qt, etc) before they dive into the weird alternatives


I appreciate your comment. I have interests in so many things, especially RAD systems like Lazarus, Delphi, and Gambas (I'm from that era). I also tinkered with Rebol many years ago, and the experience was quite "unique". Tarantool I had to work with on previous projects (it offers a lot, but is very little known).


Most people know about MediaWiki even if they don't realize they do, because it powers Wikipedia, but I wish more people used it for documentation.

You can create highly specialized templates in Lua, and there's an RDBMS extension called Cargo that gives you some limited SQL ability too. With these tools you can build basically an entirely custom CMS on top of the base MW software, while retaining everything that's great about MW (easy page history, anyone can start editing including with a WYSIWYG editor, really fine-grained permission control across user groups, a fantastic API for automated edits).
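
As a taste of that API, here's a minimal read sketch in Python with requests (point it at your own wiki's api.php; the page title here is arbitrary). Edits go through the same endpoint but additionally need a login and a CSRF token.

    import requests

    API = "https://en.wikipedia.org/w/api.php"   # or your own wiki's endpoint
    resp = requests.get(API, params={
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",        # revision text lives in the "main" slot
        "titles": "MediaWiki",
        "format": "json",
        "formatversion": "2",
    })
    page = resp.json()["query"]["pages"][0]
    print(page["revisions"][0]["slots"]["main"]["content"][:200])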

It doesn't have the range of plugins to external services the way something like Confluence has, but you can host it yourself and have a great platform for documentation.



MediaWiki is huge and very complex. Why not something simpler, like Instiki?

Personally I would prefer a wiki with a git backend. I wrote one [1] but I don't recommend using it.

https://github.com/entropie/oy



How about docusaurus and tinasaurus? The latter is based on TinaCMS.

[1] Docusaurus:

https://docusaurus.io/

[2] Tinasaurus:

https://github.com/tinacms/tinasaurus



These aren't wikis...?


Fossil, the bespoke VCS used by SQLite, includes a wiki & web server out of the box. It's not normally what people think of in this domain, but I've used it for that purpose and it works great. https://fossil-scm.org


It is a PITA from an ops point of view unless you use vanilla with no extensions. Each upgrade tends to break a bunch of extensions and you have to hunt around for solutions.


Isn't that only a problem if the extensions you use are third-party? If you use 100 different extensions, but they're all ones Wikipedia uses too, won't you be fine?


Probably.


Like any documentation system, its success depends on its audience.

As an administrator, I wish MediaWiki had a built-in updater (bonus points if it could be automated).



> As an administrator, I wish MediaWiki had a built-in updater (bonus points if it could be automated).

I get that by using the container distributions. I just mount my LocalSettings.php and storage volumes in the appropriate places and I get a new version.

And since I run on ZFS, I take a snapshot before updating; if something goes wrong, I can roll back the snapshot and go back to when stuff just worked (and retry later).



I think it's passé. These days I'd suggest something comparable to Notion.

https://www.reddit.com/r/Notion/comments/16zon95/are_there_a...



Nix package manager's `nix-shell` is something I wish more people knew about. Nix is gaining some popularity, but people often think using it has to be a really big commitment, like changing your Linux distro to NixOS or replacing your dotfiles with Nix-based ones (using the Nix package manager).

What I wish more people knew is that you don't need to do those things to get value from Nix. Creating project-specific dev shells that install the packages (at the correct versions) needed to work with a project can replace almost 90% of the docs for getting set up to work on it.



Have you tried compiling software with a nix shell? It gets linked to the Nix store. Needless to say it was a frustrating revelation.


The missing link for me is nix-copy-closure, which I learned about from a post by mitchellh: https://mitchellh.com/writing/nix-with-dockerfiles

conceptually a game changer for me. In practice it's far from a silver bullet (because every language prefers its own package management so you still have to manage those), but when it works it's quite magical.



You can patchelf to link to the host system libraries instead, or some projects can statically compile (inc musl) with less drama than usual, since your cross compilation toolchain can be part of your nix-shell.


Isn't this what you want? You are reliably linking to the specific dynamic libraries you are compiling against.

Or was the issue that you expected them to be portable? Or use commonly known dynamic library locations?



It was just surprising, is all. When I use an application from a nix shell, it pretty much always works the way I expect. The compiler experience was very jarring, but yes, I understand why it works the way it does.

I was more or less pointing out the UX issues with Nix that end up turning many people away.



There is definitely a learning investment in order to write good Nix expressions. But, if you write a good nix shell expression for your project, other devs should be able to jump in without really needing to understand those Nix expressions and still get a working environment.


Sometimes it matters, many times not.

For example I tried to run pip install yesterday on MemGPT on Nix.

It failed with a C++ error because they use miniconda.

I just created a nix shell with python, pip, etc and ran the pip install command.

Things work fine.



Oh God miniconda is a horrible piece of software on Nix.

I fell down the Nix rabbit hole, and miniconda was one of the worst things to get working. My first pass used an FHS environment, but eventually I just got the environment.yml file working in micromamba and used that instead. Except micromamba ships its own linker, which I had to override with SHAREDLD, or some random Python C++ dependencies wouldn't compile correctly.

I love Nix, but my list of complaints is a mile long. If you want to do anything with OpenGL in Nix, but not on NixOS, just give up. NixGL just doesn't really work.

Good luck getting something like Poky (Reference project for Yocto) running in Nix. The only working example puts it in an FHS environment, which uses bubble wrap under the hood. But then because you're in a container with permissions dropped, you can't use a vm. The solution I see in the support forums is roll your own alternative FHS environment based on something else.

/Rant



Yes, this is where I am at. Used it for over a year in a DevOps role and have developed a huge distaste for it. Despite the language itself being one of the most complained about things, I didn't mind it so much. It was the mile-long stack traces, which were often wrong, and constantly fiddling with things I didn't want to fiddle with to get something working. Just ended up costing me way too much time.


Snap also has a way to open a shell inside the context of a snap package


It's good for a C or C++ project where libraries are very environment-specific. But most modern languages have their own package/environment managers, which makes Nix redundant.


Not really. I introduced it to our Python projects at work and it's been great. Partially because of poetry2nix, and partially because it makes it easy to include other stuff like a specific version of Redis for testing purposes. Everybody gets the exact same dev environment, reducing a ton of "works on my machine".


Presumably it also can fill the role of conda/mamba i.e. also managing C/C++ libraries in the same way in the nix environment, isolated from the system libraries?


Yep, it can lock down exact versions of those libraries as well, which is great for not mucking about with lib versions between even different Ubuntu versions, not to mention distros or macOS.


Except Nix doesn't support Windows, which is the best reason to use conda (having to support Windows).


virtualenv is the python way. For things like redis and other external web stuff, docker is the standard.


Sure, that works. Or I can have it all in a single shell.nix file that covers everything and is super simple to use. It's great for handing off to coworkers that don't usually use Python.


It's not simple. The Nix programming language is like untyped ML. Most people aren't used to it, and even if you are familiar with it, it gets hella hard to read. The learning curve is huge.

One Dockerfile and a poetry file work just as well. And it's simpler. It's literally the same thing, but using OS primitives to manage the environment rather than shell tricks. Makes more sense to me to use a dedicated OS primitive for the task it was designed for.

Additionally docker-compose allows you to manage a constellation of environments simultaneously. This is nowhere near as straightforward with nix.

I love nix but being honest here. It's not definitively the best.

The biggest reason right now to avoid it is adoption. Most people won't know what to do with a shell.nix



> One Dockerfile and a poetry file work just as well. And it's simpler. It's literally the same thing, but using OS primitives to manage the environment rather than shell tricks. Makes more sense to me to use a dedicated OS primitive for the task it was designed for.

1) not just as well because docker is repeatable, not reproducible

2) not if you need GPU acceleration which is a headache in docker, but not Nix shells

> Additionally docker-compose allows you to manage a constellation of environments simultaneously. This is nowhere near as straightforward with nix.

- devenv.sh

- arion

- https://flakular.in/intro

> Most people won't know what to do with a shell.nix

The same was once true for Dockerfile



>1) not just as well because docker is repeatable, not reproducible

Not sure what you're saying here but most likely you're referring to some obscure pedantic difference. Effectively speaking docker and nix shell achieve similar objectives.

>2) not if you need GPU acceleration which is a headache in docker, but not Nix shells

This is true. But this is the only clear benefit I see.

>- devenv.sh - arion - https://flakular.in/intro

right. So? I said nowhere near as straightforward. This isn't straightforward. It's an obscure solution.

>The same was once true for Dockerfile

False. Dockerfiles are much more intuitive because they're just declarative config. With Nix shell it's mostly people who like Haskell or OCaml who are into that style of syntax. I like it, but clearly that syntax has not caught on for years and years and years. Quite likely Nix will never catch on to that level either.



I assume what they're getting at is that when you download a Docker image it'll always be the same (repeatable), but the image which is built from a Dockerfile may change even if the file does not (not reproducible).


Yes, exactly.


https://martinfowler.com/bliki/ReproducibleBuild.html, https://wiki.debian.org/ReproducibleBuilds, and https://reproducible-builds.org/ all say that just because something is "repeatable" doesn't mean it is "reproducible".


byte for byte builds? I knew it was some pedantic thing. I'm sorry if some timestamp changes one byte of the build.


No, the biggest issue is you rebuild an image that runs apt update and 20 dependencies break.


nix is far simpler for consumption. My coworkers don't like fancy new things, and they haven't had any complaints. They don't have to dick around with half a dozen different commands to get everything set up, or bother with docker volumes/port mapping/etc. They just run nix-shell and it all works. That's all you have to do with a shell.nix file, it's very simple.

It is harder to write on average atm, but it's very much worth it to me when it comes to sharing code for development. Also, LLMs help quite a bit when writing nix.



It's the same thing for Docker. Just one command. The Nix is much harder to deal with, mainly because shell.nix is harder to read and write than a Dockerfile.

Additionally, Nix uses shell hacks to get everything working. Docker uses an OS primitive DESIGNED for this very use case.

And additionally, because Docker uses OS primitives, you can use docker-compose to manage multiple processes on multiple different environments simultaneously. Something that's much harder to do with nix shell.



You're seriously overestimating how hard this is, especially with poetry2nix. I like docker just fine and have used it in a development workflow and it's a pain in the ass and should never be used for that. It's great for packaging stuff up for prod, though.

Also, one man's "DESIGNED" is another man's hacks. I don't see anything wrong with how nix works. Potato/potato, I guess.



I'm not overestimating anything. it's not hard once the shell.nix is there, but everything to get to that point is waaay harder than docker. In the end once you're done you have two ways of doing the same thing with one command.

I think I know what you're getting at. nix-shell provides a fast way to get access to that specific shell environment which is a bit more annoying to do with docker. All docker needs to do is provide this interface by default and the only surface level differences between the two techniques is really just the configuration.

>Also, one man's "DESIGNED" is another man's hacks. I don't see anything wrong with how nix works. Potato/potato, I guess.

By any colloquial usage of the term "designed" in this context by any unbiased party, it's obvious Nix is more hacky under any charitable interpretation. NixOS is a layer on top of Linux; containers are a Linux feature. Thus creating a layer on top of Linux to use existing features is the more hacky, less elegant solution.

It can actually go in the other direction: rather than shell tricks, Nix can also use containers under the hood. Overall though, the API for Docker is superior in terms of editing config files, but not for switching shells. Additionally, the underlying implementation of Docker is superior.

Your main problem is with the API which is just opinionated.



>It's not simple.

Neither is using virtualenvs for Python packages with native extensions.



Yeah I know. I'm saying the nix language is not simple. With docker you don't need virtual envs.


Most language package/environment managers do not come close to giving you the guarantees that Nix does.


Two problems:

a) Unless you literally write everything in one language, you will have to deal with learning, supporting and fixing bugs in N different package/environment managers instead of just one.

b) If you have a project that uses several languages (say, a Python webapp with C++ extensions and frontend templates in Typescript), then Nix is the only solution that will integrate this mess under one umbrella.



a. Using Nix in place of a package manager means dealing with libraries specific to that language. It's still managing different APIs, and there's more potential for bugs and unforeseen problems in custom third-party APIs as opposed to the official one. Admit it, you hit tons of problems getting everything to work fine with Python.

b. C++ is the only one that would benefit from nix here because C++ dependencies are usually installed externally. There's no folder with all the external sources in the project. Even so this can be achieved with docker. If you want you can have docker call some other scripting language to install everything if you want "one language" which is essentially what you're doing with nix.



I had to use it for a C++ project and it was one of the biggest wastes of time and most frustrating moments of my computing career. There were constant breakages due to glibc mismatches, Nvidia drivers, and whatnot, and getting a host IDE to have semantic understanding of the paths etc. necessary for normal completions was nigh impossible.


Yeah but other than conan it's one of the few things where you can get a sort of "project package manager" experience like npm with C++. It's not nearly as user friendly as what they have for python or nodejs.


No way. Language specific managers are terrible at managing external dependencies. Trying to get python packages to link to system libraries is terrible. Nix makes it infinitely better.


GnuPG/PGP and the web of trust[0]. A lot of things I see blockchain being used for today (e.g. NFTs) seems like it would be better solved using standard OpenPGP signatures with no backing chain.

Additionally, as machine-generated content proliferates, I think having services use something like the web of trust concept for membership would be super powerful. The problem is, of course, the terrible UX of cryptographic signatures. But I think there's a lot of opportunity for the group that makes it easy to use.

[0]: https://en.wikipedia.org/wiki/Web_of_trust



GPG is great. It also makes it really easy to encrypt environment dotfiles that safely reside in your source code repository. This is my favorite way of storing sensitive app configs. You don't even need a PGP private key in your keychain to do it. You can use a passphrase.


This sounds interesting. Have you got an example of how you do this by any chance?


I’d really like to hear more about this


There's a problem though: either you have to ban transferring NFTs (or other tokens), which makes those a lot less useful, or you need something to prevent double spend attacks (something that blockchain solves).


As a follow-up to the web of trust, I was pretty excited about Keybase and the breadth of applications they enabled, with a slick UX for web-of-trust. Pity they didn't quite succeed (got acquired/acquihired by Zoom), but it would be wonderful if something like that got another life.


Take a look at KERI.


> seems like it would be better solved using standard OpenPGP signatures with no backing chain.

Programmability though



Can you elaborate?


There would be no automated consensus over results of execution of programs that power the applications


Just curious, which would be most reliable? One entity confirms it who confirmed 1000 previous results, 2 who confirmed 500, 10 who confirmed 100 or 1000 who confirmed 1 previously?


How about the actual case: many thousands of entities, who confirmed hundreds of thousands of previous results?


Would many thousands of entities, who confirmed hundreds of thousands of previous results be preferable over hundreds of thousands of entities, who confirmed many thousands of previous results?


I think I'm pretty much ok with either one.


Magic Wormhole: https://magic-wormhole.io/ An easy and secure file transfer program. I use it almost every day since I discovered it. The author has a very good talk on it from PyCon 2016: https://www.youtube.com/watch?v=oFrTqQw0_3c


I used to do some professional services work, and Magic Wormhole was one of the most reliable ways for me to get files to clients whose companies blocked traditional file sharing hosts like Dropbox and Google Drive.


Check out croc, I've been using it for years, and it works pretty great too!

https://github.com/schollz/croc



wormhole-william[1] is a nice version in Go, compatible with the original version, with an easy to install release binary.

[1]: https://github.com/psanford/wormhole-william



Curious if there’s any web based implementation yet? I guess it’s more of a hassle to ask the receiving party to open up the sending session when you can just provide them an asynchronous link to a hosted file (Dropbox, mega, etc)


Seems like it's down.




Not sure if you're looking for things as "trifling" as programming languages, but I do wish more people knew about Nim. It's fast, statically typed, reads more or less like Python, has a great effect system, etc. It's a joy to use. I've been working through writing an interpreter in it: https://youtu.be/48CsjEFzyXQ


I subscribed to your channel, power to you to finish up this series :). I love Nim, I even wrote a book on it! https://xmonader.github.io/nimdays


Thanks! I plan to record many more videos. Had some unplanned construction going on in my house so my recording setup is unavailable for a bit. As soon as it's done in a few weeks, I'll put out more videos.

Your book looks great, will check it out.



Nim should be more popular, but it seemed to take some time to get started properly. It's now far more ready for serious use. Python also took some time before it took off, so there's hope.


Just so you know, the call for speakers for the 2024 Carolina Code Conference (polyglot) will open January 1.

A Nim talk would be a great fit for the event.



I have a handful of Nimble packages. Lovely language, though I haven't done much with it recently. I wish it were easier to sell people on style agnostic syntax.


More power to nim. It just needs better tooling, IDE support.


I was using Nim for some of last year's Advent of Code problems. I mostly liked the syntax. I was a bit bothered by the standard library having a snake_case and a camelCase reference for each function (if I'm remembering that correctly).

At the time, installing the Nim package manager, Nimble, also required me to have NPM. This was not ideal, but looking at [the nimble project install docs](https://github.com/nim-lang/nimble#installation) it seems like it is now packaged with the language.

Might try dusting it off for some AoC puzzles this year :)



I believe the whole language is "style insensitive" for variable names. So it's not just a feature of the stdlib.


Are you serious?


Yes. It’s so you can maintain a consistent style in your code base even if your dependencies use different styles. Nim has excellent C/C++ interop and it’s relatively common to interact with C or C++ symbols directly, and being able to do this without needing to adopt the dependency’s style or wrap everything is nice.

In python, for historical reasons the logging module uses camelCase while most other modules use snake_case, so it isn’t really possible to use the logging module and maintain a consistent style. This is a non-issue in Nim.



The downsides of this approach are unfortunately that it makes wrapping certain low-level libraries an absolute pain in the ass (especially anything to do with keyboards). But overall it's a non-issue, tooling recognizes both styles and you don't notice it.


Nim 2.0 changed the default to treating style mismatches as warnings.

E.g. it's something to check but not an error. You can easily set a config to make them an error or ignore them.



Cool, that definitely sounds like a welcome improvement.


There's also `atlas` that was released with Nim 2.0.

http://nim-lang.github.io/Nim/atlas.html



I hate Nim for depriving me of the joy of using tabs instead of spaces. It's just... unreasonable.


Set tabs to 2 spaces in your editor.


The Arcan display server is a really cool idea. Even if it doesn't manage to get popular, I think there are ideas here that we could mine to use them in popular programs.

- https://arcan-fe.com/2022/10/15/whipping-up-a-new-shell-lash...

- https://arcan-fe.com/2021/04/12/introducing-pipeworld/

- https://arcan-fe.com/2020/12/03/arcan-versus-xorg-feature-pa...

- https://arcan-fe.com/2021/09/20/arcan-as-operating-system-de...



This. A hundred times this. The Cat9 stuff alone is so far ahead of the cut-and-paste, cookie-cutter things like Warp that some have thrown millions at, and yet that is not even close to what was just presented as a fun thing.

The latest EU-funded 'a12' things are also soooo high-concept, but not a fever dream.



Sphinx [1] gets my vote. It's the docs system that powers most sites in the Python ecosystem so it probably looks familiar to you.

I call it a docs system rather than static site generator because the web is just one of many output targets it supports.

To tap into its full power you need to author in a markup that predates Markdown called reStructuredText (reST). It's very similar to Markdown (MD) so it's never bothered me, but I know some people get very annoyed at the "uncanny valley" between reST and MD. reST has some very powerful yet simple features; it perplexes me that these aren't adopted in other docs systems. For example, to cross-link you just do :ref:`target` where `target` is an ID for a section. At "compile-time" the ref is replaced with the section title text. If you remove that ID then the build fails. Always accurate internal links, in other words.

The extension system really works and there is quite a large ecosystem of extensions on PyPI for common tasks, such as generating a sitemap.
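
For anyone who hasn't seen it: Sphinx configuration is itself plain Python, and enabling an extension is one list entry in conf.py. A minimal sketch (sphinx.ext.autodoc ships with Sphinx; sphinx-sitemap is the sitemap extension mentioned above, installed from PyPI):

    # conf.py
    project = "myproject"
    extensions = [
        "sphinx.ext.autodoc",   # pull API docs out of Python docstrings
        "sphinx_sitemap",       # emit a sitemap at build time
    ]
    html_baseurl = "https://example.com/"   # sphinx-sitemap needs the site root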

The documentation for Sphinx is ironically not great; not terrible but not great either. I eventually accomplish whatever I need to do but the sub-optimal docs make the research take a bit longer than it probably has to.

I have been a technical writer for 11 years and have used many SSGs over the years. There's no perfect SSG but Sphinx strikes the best balance between the common tradeoffs.

[1] https://www.sphinx-doc.org/en/master/index.html



You can use Markdown by adding the .md suffix in your conf.py and myst_parser [1] to your Sphinx extensions.

They are starting to work towards full sphinx functionality in myst markdown, too.

[1] https://myst-parser.readthedocs.io/en/latest/intro.html

[2] https://executablebooks.org/en/latest/blog/2023/new-project-...
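
The conf.py change being described is roughly this (a sketch; the MyST docs have the authoritative version):

    extensions = ["myst_parser"]
    source_suffix = {
        ".rst": "restructuredtext",
        ".md": "markdown",
    }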



I can't recommend this enough! It's such a quality of life improvement to get the powerful dynamic documentation features of rST and Sphinx (and its many extensions), but in the more pleasant and familiar syntax of Markdown. I use MyST + Sphinx for all my project docs now.

This podcast episode is worth a listen for anyone interested in these tools and where they're headed: https://talkpython.fm/episodes/show/354/sphinx-myst-and-pyth...



A Sphinx plugin[0] allows for writing in markdown, and I'd heavily encourage using it if you're looking to get widespread adoption of sphinx on a project or at a workplace. Rst is fine once you learn it but removing barriers to entry is useful.

[0] https://www.sphinx-doc.org/en/master/usage/markdown.html



edit: my understanding of feature parity in reST/Markdown seems outdated - comment below might be incorrect

The value prop of Sphinx goes down a lot if you're not using reST because you can't use the extensive catalog of directives, such as the ref directive that I mentioned in my first comment. If you must use Markdown then there's not much difference between Sphinx and all the other Markdown-powered SSGs out there. In other words there's not a compelling reason to use Sphinx if you've got to use Markdown.

From Sphinx's Getting Started page:

> Much of Sphinx’s power comes from the richness of its default plain-text markup format, reStructuredText, along with its significant extensibility capabilities.

https://www.sphinx-doc.org/en/master/usage/quickstart.html#g...



Myst has parity with most reST features and is equivalent to markdown for users not using those features: https://myst-parser.readthedocs.io/en/v0.13.7/using/syntax.h...


Oh cool I need to revisit MyST then. I thought it didn't support cross-references but it looks like I'm wrong: https://myst-parser.readthedocs.io/en/v0.13.7/using/syntax.h...

I will have to dig into exactly how much parity we're talking here but if it's very strong parity then I redact my previous statement

Thanks for correcting me!



It works with all docutils and Sphinx roles, and almost all directives, including extensions.

A notable exception is autodoc (automodule, autoclass, etc.), and any other directives that generate more rST. The current workaround is to use eval-rst:

https://myst-parser.readthedocs.io/en/latest/syntax/code_and...

Some more discussion about that in these issues:

https://github.com/executablebooks/MyST-Parser/issues/163

https://github.com/sphinx-doc/sphinx/issues/8018



I have always found Sphinx challenging, in usability and syntax :( It could probably be much more advanced, but I went with pdoc3 for API docs and mdBook for documentation in general.

What I really hope exists is a system where I can ergonomically reuse documentation (sections) in other pages.

I built that system multiple times, doing preprocessing for things like including parts, special linking, or referencing images from anywhere

https://github.com/xmonader/publishingtools/tree/development...



https://ncase.me/nutshell/ is pretty neat. I don't know how well suited to "serious" applications it is, though.


I've used Sphinx quite a bit, the syntax is a bit fugly. For markdown's

   [text](url)
one uses

   `text <url>`_
...


I think this is also one of my least favorite syntax choices in rST. I second the suggestions to use Sphinx with Markdown via MyST.


irobinovitch just corrected me that the library that provides Markdown support for Sphinx supports the features [1] that I care about; have to dig into the details but if the feature parity is very good and you strongly prefer Markdown over reST then I would say... go for the Markdown!

[1] e.g. https://myst-parser.readthedocs.io/en/v0.13.7/using/syntax.h...



Just want to +1 this, and also add a twist. The Sphinx community also has a great extension called hieroglyph, which lets you use rST directives to build slide presentations which also double as single-page HTML notes documents.

https://hieroglyph.readthedocs.io/en/latest/getting-started....

This meant I could first write a blog post on learning Clojure as a Pythonista[1]; then turn some code samples and tables and images into slides I could present at a public talk on my laptop or desktop[2]; and then finally publish a public notes document that talk attendees could use to easily study or copy-paste code examples[3]. (The notes are the exact same contents of the slides, just rendered in a simple single-page HTML format, with each slide transformed into a section heading, with permalinks/ToC auto-generated.) So, this is generated HTML from a single .rst source[4], all the way down! And, of course, I could version control and render the .rst file powering the slides / notes / etc. in GitHub.

[1]: https://amontalenti.com/2014/11/02/clojonic

[2]: https://amontalenti.com/pub/clojonic/

[3]: https://amontalenti.com/pub/clojonic/notes/

[4]: https://amontalenti.com/pub/clojonic/notes/_sources/index.tx...

Note: the slides in [2] do not play well on mobile. You are meant to use keyboard arrows to advance and tap “t” to switch into tiled mode (aka slide sorter) and “c” to open a presenter console. The slides are powered by a fork of html5slides, which will look familiar if you’ve seen the JS/CSS slide template that Go core developers use in https://go.dev/talks (they generate those with “go present,” a different tool, though).

More recently, I have also used a similar-in-spirit tool called marp (https://marp.app) for generating technical slides from source, but the output and functionality was never quite as good as rST + Sphinx + hieroglyph. The big advantages to marp: Markdown is used as the source, some tooling allows for VSCode preview, and PDF export is fully supported alongside HTML slides.

I have a soft spot for Sphinx, not only because it was responsible for so much great documentation of Python open source libraries (including Python’s own standard library docs at python.org), but also because the first comprehensive technical docs I ever wrote for a successful commercial product were written in Sphinx/rST. And the Sphinx-powered docs stayed that way for a ridiculously long time before being moved to a CMS.



I don't know if they're "unpopular", but I think the BEAM family of languages, Erlang, Elixir, LFE etc., could be used more. I read about more and more problems people have on here and just think that they'd go away on the BEAM.


My absolute favorite framework I've ever worked with is Akka.NET and how it taught me how to operate against concurrency in a different way. Actor-based infrastructures and other Erlang-inspired concepts are really just wonderful and they need a whole lot more attention, yes!


I worked for a long time with Akka and Scala and share the same sentiment. It packed so much power and yet felt so intuitive. Now every time I pick up a new language I instinctively look for an Actor based framework in it.


Firejail is cool: https://github.com/netblue30/firejail

Linux namespaces/cgroups but lighter than Docker.

I use it when I want to limit the memory of a Python script:

    maxmem="56"  # GB
    firejail --noprofile --rlimit-as=${maxmem}000000000 python myscript.py



How is it better for this use case than just using rlimit?


I'm not sure about this specific use case, but a reason for using cgroupv2 over rlimit is that cgroup allows you to limit the resources of a _group_ of processes, which is handy if, say, your Python script uses the `subprocess` module.
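
For contrast, the plain-rlimit route can be done from inside the script itself, no wrapper needed. The catch is the one described above: the cap applies per process, not to the process tree as a whole. A sketch:

    import resource

    # cap this process's address space at 2 GiB; children inherit the limit,
    # but each child gets its OWN 2 GiB, unlike a cgroup's collective budget
    limit = 2 * 1024**3
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))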


Have to say: I really like the idea of `firejail firefox`. Thanks for sharing that!


Asciidoc lightweight markup can be used in place of ANY complex XML-based CCS (component content system), i.e. DocBook, DITA, S1000D, 40-50-something MIL-STD "specifications". Asciidoc can do anything they can, and can do it cheaper, faster, and better. With standard tooling that's everywhere you have a computer.

I'm not sure I can type out, with trembling fingers, how many dollars have been flushed down the toilet of CCSs by businesses that either had no business experimenting with componentized content, or businesses that didn't have resources for training up staff, or vendors who literally evaporated like morning dew after they'd gotten their initialization fees. So just one single story: one prime aerospace vendor I worked with had started their road to S1000D publishing in 2009. Today - at the end of 2023, and more than twenty million dollars later, with a garbage truck full of sweat and blood - that system has not released a single publication to the end user. Not one.



Turn your Postgres database into a REST API: https://postgrest.org

Previous discussions on HN: https://hn.algolia.com/?q=postgrest
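
Once PostgREST is pointed at a database, every exposed table becomes an endpoint, with column selection, filtering, and ordering in the query string. A sketch in Python (the films table and its columns are hypothetical; 3000 is PostgREST's default port):

    import requests

    resp = requests.get("http://localhost:3000/films", params={
        "select": "title,year",   # project just these columns
        "year": "gte.2000",       # PostgREST filter syntax: operator.value
        "order": "year.desc",
    })
    print(resp.json())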



Is there anything similar for Sqlite?


Don't overlook https://datasette.io/ even though it does much more than endpoints.


Yes. Search the table at https://github.com/dbohdan/automatic-api for "SQLite". Disclosure: this is my repository.


Exactly that (same api structure) https://docs.subzero.cloud/


Wow; this is extremely nice for quick prototyping!


I use it in production. Handling roles and permissions in the DB is a game changer.


But beware row-level security. It's great, but the query planning is a huge mess, and you can end up with serious performance issues real fast if you're not careful.


My understanding, perhaps flawed, is that row security policies add predicates to the SQL, which then encounters the query planner like any other. Is this not the case?


At one point, I really thought it was used in Supabase. But I guess they only wrote the js wrapper for it. https://github.com/supabase/postgrest-js

Came here to mention Hasura as well (not sure of its popularity though) https://hasura.io/graphql/database/postgresql





I would not recommend Hasura if you're on PostgreSQL.


Since I generally have no clue what technologies are popular (other than the obvious big name projects) I'll just toss out some interesting links I've recently bookmarked in comments here.

- gron (Greppable JSON): https://github.com/tomnomnom/gron

- MarkDownload (Markdown Web Clipper): https://github.com/deathau/markdownload

- Lean4 links:

-- Theorem proving: https://lean-lang.org/theorem_proving_in_lean4/introduction....

-- Natural Number Game: https://adam.math.hhu.de/#/g/leanprover-community/NNG4



Highly recommend the natural number game


* IPv6. A genuinely useful tool, in particular for homelabs: multiple globally routable addresses per machine. One address per service. No need for Host/SNI vhosting. Works well with containers. To get v6 support, either find ISPs/SIMs that do v6, or wireguard to a VM that providss a /56.

* SSH ForcedCommand. Lots of usecases here, for backups, file storage, git, etc.

* Verilog as a tool for software developers to learn digital electronics. VCS/code/simulation/unit tests are all a lot more familiar and expected for developers.

* Writing tools yourself. There's often decent stable libraries that do 90% of what you want, and the remaining 10% is less effort than dealing with awkward integration with off-the-shelf tools. This relies on having low overhead packaging/deployment, e.g. Nix/Guix/Bazel.



In the vein of your last point, I use ChatGPT-4 to write all my one-off scripts for odd tasks. Without knowing Python, I worked up a script that can figure out which asset I have selected in UE4, grab the text from it, send it to ElevenLabs to create a text-to-speech conversion, convert the downloaded mp3 to wav, import it into UE4, and then set that as the asset's (dialogue wave) voice-over line…


Do you have a source for a cheap cloud vm like you suggested?


If you just want to play with IPv6 on a VM, most VM providers will offer a /64, which is enough to have an address per service on your machine. If you want to play with IPv6 on multiple subnets, you'll need something larger than a single /64, such as a /56, since each subnet should be a /64.

I rely on my home's v6 /56, so I don't have experience with using VMs for this, but I know of a few providers that offer /56 (and above):

* Mythic Beasts and Linode offer a /56 on request. They're not cheap VM providers though.

* https://ifog.ch/en/vps offer /48.

* https://tunnelbroker.net/ offer /48, which can be used via any VPS/home.

https://reddit.com/r/ipv6 for more info.



Lithium Titanate batteries. Nothing else is lightweight, safe, currently available, and lasts 20000 cycles.

ESPHome. It's a framework for declaratively building firmware for microcontrollers, based on rules like "This pin is an input with debouncing, when it changes, toggle this".

Contributing to them has probably been the most fun I've had programming in years.

We just need power management, and a C++ implementation of the Native API client. It's so close to being able to replace most of what I'd normally code by hand in Arduino.

https://esphome.io/

RealThunder's fork of FreeCAD: https://github.com/realthunder/FreeCAD

They fix so many issues. Linear patterns can duplicate other linear patterns!

Vorta: It's the best backup technology I've seen. Just an easy guided GUI for Borg, which gives you deduplication. I just wish they let you deduplicate across multiple repositories somehow.



ESPHome looks really cool!

I've been looking for a more convenient way to configure some ESP32-based input devices (similar to macropads). I was interested in QMK, but it doesn't support ESP32. So far I've been using MicroPython / CircuitPython, which I generally like, but on multiple occasions I've thought "I wish I could just put this in a config file."

The matrix keypad and key collector components look similar to what I was looking for. Can the key collector be used with other multiplexing methods like shift registers?



MicroPython was what I used before ESPHome too! I think ESPHome could really benefit from a scripting component, but adding one seems like lots of work.

You can send keys directly to the key collector from wherever you want, but you'd probably have to configure an individual action for each key, unless there's a feature I'm not seeing.

Maybe you could create a new ShiftRegisterKeypad component?



Looks like RealThunder is a bit behind FreeCAD's official repo - last commit in May.

Lithium Titanate sounds interesting - TIL...



RDF and the semantic web used to be my go-tos for this, as I believe many of the core ideas are still valid, often overlooked, and sometimes even poorly re-implemented. Which says something.

However, lately I've come to like llama.cpp and friends. Yes, it's not the ChatGPT miracle or whatever, but how often do you /actually/ need that? Despite its tremendous popularity, it still seems like something more people should know about. For me, I've had great fun with running LLMs locally and experiencing their different "flavors" from a more "phenomenological" perspective (what is it like to use them) rather than a technological one.



I’m doing a personal project using RDF. Not semantic web. Not OWL. Just “raw” RDF. And I really like it.

It’s perfect (so far) for my purposes of an extensible data model.

I’m sure others have augmented applications with “generic” data types (like properties and such). You always walk this fine line that if you fall to far you find you’re writing a database on top of a database.

We’ve also in the past fallen into that hole when building a DB schema that we stumble into what we coined the “absurd normal form” or, also colloquially, the “thing-thing” table that relates everything to everything.

Well, RDF is the thing-thing table, and it just embraces it. And for my project it’s a lot of fun. I have structured types, with specialized forms and screens. But, if desired, the user can jump into adding relations to anything. It’s essentially an RDF authoring environment with templates and custom logic to make entities. And in the end they can always dive into SPARQL to find whatever they want.

It’s not intended to work with zillions of data items, it’s just a desktop tool. I always found it interesting early on that the primary metric for triple stores was how fast they could ingest data, I guess nobody actually queried on anything.

Anyway, it’s fun and freeing to work with.



I really like ARM (automatic ripping machine)

https://github.com/automatic-ripping-machine/automatic-rippi...

Put a DVD/Blu-ray in a drive and it automatically determines the title, starts ripping, then pops the disc out when it's done.

There's options for post-ripping transcoding also.



Definitely Forth and Factor. Every programmer should get a little bit familiar with the concatenative style. Three days ago I discovered a channel on YT which talks about Forth; even Chuck Moore himself gave a talk there about green threads and the like.

Given that WebAssembly is a stack language with no GC, I do expect a comeback of concatenative programming some time in the future.

https://www.youtube.com/@siliconvalleyforthinterest1736



Agreed, but I don't recommend thinking of wasm that way. Wasm is not meant to be run as a literal stack machine; it's just a way to represent computations that can be optimized across architectures.

I literally made this mistake, creating a wasm interpreter, before I realized it was a terrible runtime bytecode.



Came here to say this, so I entirely agree. I found Forth and the concept of concatenative languages after deep study of the fundamentals of computing, specifically studying Lisps and the Lambda Calculus. Eventually found combinators and the Iota combinator. Finally hit the bottom of the rabbit hole!

It really does give the lightbulb moment. “Don’t try to generate code, that is impossible. Only try to realize the truth… There Is No Code (only data)”



How are you using the Forth philosophy? Did you build your own, using another system?


I went through a similar path! Concatenative programming appeared to me as the most economic paradigm one can possibly come up with, an ultimate reduction, with practically no path to further downward abstraction. It feels more like a primitive building block than anything else.

I always admired the design of Unix pipes, and flow-oriented programming in general; then you realize that these things are just natural to stack processing, you need to introduce nothing. It's like you're programming with order itself.

Programming is taught and practiced in a very convoluted way, and it makes you think that complexity must somehow stem from the lower levels of abstraction, until you get a grip on stack virtual machines: they couldn't be simpler in their innate mechanics. I don't know if it's only me, but I used to think Turing completeness was something challenging to achieve in a system, a hallmark of sophisticated complexity; as I understood stack-based languages I realized it's the opposite, it's the hallmark of simplicity. I wonder what it's like to have had Forth et al as a first language…
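
Those innate mechanics really do fit in a few lines. A toy concatenative evaluator in Python (a sketch, not any particular Forth): every word pops from and pushes to one shared stack, and composition is just concatenation.

    def run(program, stack=None):
        stack = stack if stack is not None else []
        words = {
            "+":   lambda s: s.append(s.pop() + s.pop()),
            "*":   lambda s: s.append(s.pop() * s.pop()),
            "dup": lambda s: s.append(s[-1]),
        }
        for token in program.split():
            if token in words:
                words[token](stack)       # words operate on the stack...
            else:
                stack.append(int(token))  # ...literals get pushed onto it
        return stack

    print(run("3 dup * 4 dup * +"))   # 3*3 + 4*4 -> [25]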


Are you using Forth day-to-day? And in what form?


- Capability Based Security (NOT the permissions flags on your phone or "app") - Offers the possibility for honestly secure computing

- Data diodes (unidirectional networks) - allow you to monitor a network without allowing external control (or only submit info, never ever exfiltrate it)

- GNU Radio - you can play with your audio ports, and learn instinctively how to deal with all the stuff that used to require DSP chips... then apply that knowledge with a $30 RTL-SDR dongle.

- Lazarus - seconding the above... a really good Pascal GUI IDE. The documentation needs work, but it's pretty good otherwise.



WinCompose¹, or more generally, use of a Compose key² to type all sorts of Unicode symbols or really any character (sequence) you like. People are used to thinking that they mostly can’t type what they don’t see on their keyboards, but a Compose key provides a universal method to type a large repertoire of characters through use of mnemonics.

¹) http://wincompose.info/

²) https://en.wikipedia.org/wiki/Compose_key



I used to use the compose key a lot; currently I really like Espanso. It does arbitrary text replacement and has some pretty fancy features, but is also quite useful for turning \alpha or :laughing: into the symbols I want.

https://espanso.org/
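
A minimal match file, assuming Espanso's default config layout; the triggers here are just examples:

    # ~/.config/espanso/match/base.yml
    matches:
      - trigger: "\\alpha"
        replace: "α"
      - trigger: ":shrug:"
        replace: "¯\\_(ツ)_/¯"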



Bottle.py: uber-fast and simple python web microframework, about 3x faster, saner, and more memory-efficient than Flask in my experience: https://github.com/bottlepy/bottle
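
The whole framework is a single file with no hard dependencies, and a hello-world is essentially the quickstart from its docs:

    from bottle import route, run

    @route('/hello/<name>')
    def hello(name):
        return f'Hello, {name}!'

    run(host='localhost', port=8080)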

Fossil: distributed version control and much more in a single executable, from the creators of SQLite: https://fossil-scm.org/
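
And Fossil really is one self-contained executable. A typical first session might look like this (the repository name is arbitrary):

    $ fossil init project.fossil      # the whole repo lives in this one file
    $ mkdir project && cd project
    $ fossil open ../project.fossil   # check out a working copy
    $ fossil add . && fossil commit -m 'initial import'
    $ fossil ui                       # built-in web UI: timeline, wiki, tickets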



In the same vein, I'd name Tornado (www.tornadoweb.org). Also rather small and comprehensible, but with full async support that's evolved extremely nicely. Generally I love how well-designed and maintained it is.


I still use Bottle for all my starter projects. It’s just unmatched in terms of bang for the buck.


https://en.wikipedia.org/wiki/Pick_operating_system

This has accounted for about 90% of everything I've built since 1985.

Pick code generates my side project: https://eddiots.com/1
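
For anyone who has never seen it, a rough sketch of multivalue Pick/UniVerse BASIC: records are dynamic arrays addressed by field and value. The file and field layout here is entirely hypothetical:

    * open the (hypothetical) CUSTOMERS file and read record id 1001
    OPEN 'CUSTOMERS' TO CUST ELSE STOP 'no CUSTOMERS file'
    READ REC FROM CUST, '1001' THEN
       PRINT REC<1>     ;* field 1: customer name
       PRINT REC<2,1>   ;* field 2, value 1: first phone number
    END ELSE PRINT 'no such record'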



“It is named after one of its developers, Dick Pick.[2][3]” You can’t make this stuff up!


I know, not reddit, but I simply can't resist:

> Pick was originally implemented as the Generalized Information Retrieval Language System (GIRLS) on an IBM System/360 in 1965 by Don Nelson and Dick Pick [...]



I'm sad to hear Dick Pick died at 54 of stroke complications.


That's Richard A "Dick" Pick. Show some respect.


My first job involved working on a Pick system. The system started life on a Prime minicomputer and was migrated to UniVerse on Solaris.

I seriously miss it.

Every once in a while I try to get back into it. Usually it takes the form of trying (and failing) to get a demo/personal version of UniVerse, but lately I've been poking at ScarletDME a little bit. I'd even pay money (not much since this is just hobby stuff, but some) for UniVerse, but even the cost of it seems to be a closely guarded secret.



Thanks, Mister_Snuggles, for reminding me I'm not the only one left.

I HAVE to code in PICK.

"Unless it comes out of your soul like a rocket, unless being still would drive you to madness or suicide or murder, don’t do it." - Charles Burkowski

(Funny, they named the current support company "Rocket".)

Here's the link to the current UniVerse trial version (free and good until 04/2025). Get it, install it, and make something with it. Please don't let that part of you die.

https://www.rocketsoftware.com/products/rocket-multivalue-ap...



Yup, this is exactly where I get to when I try and fail to get UniVerse.

What's the trick to making that form work? It won't accept my @gmail.com address, and I don't really want to use my work email address and potentially mis-represent things. Especially since my work used to use one of Rocket's products.



I used my work email and then forwarded it to my Gmail.

If you have concerns about doing that, you can just download it from my website at

http://eddiots.com/UVTE_WINDOWS_11.4.1.zip (You may have to cut and paste this link into a new tab. HN doesn't seem to like this.)

If you have any problems or need the UNIX version, just reply here or contact me. email on my profile. Let me know how it goes.



Windows only?


UNIX too


Collection of books about Pick: https://jes.com/downloads


this project is AMAZING


Thanks, aredirect!

My next phase is to put the Pick-generated SVG into CodePen and provide links to show how to draw the art with code.



Just making the links clickable:

Couple of things I like

- tarantool https://www.tarantool.io/en/

- rebol/red-lang https://www.red-lang.org/

- U++ : https://www.ultimatepp.org/

- lazarus: https://www.lazarus-ide.org/

- fasm: https://flatassembler.net/



I have to admit I tried to edit the post multiple times and even checked the formatting options https://news.ycombinator.com/formatdoc (having links between angle brackets didn't work :( ). Sorry for the inconvenience, and thank you for making the effort :)


> Urls become links, except in the text field of a submission.


Any explanation as to why it's a thing?




- D (the programming language). It's a lot like C++ if C++ were rebuilt around modern concepts like modules. (A small taste after this list.)

- Matrix. It's pretty popular but I see way too many open source projects still saying "join our Discord!" instead of "join us on Matrix!"
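
As promised above, a small taste of D: selective imports from modules, lambdas, UFCS chaining, and lazy ranges.

    import std.stdio : writeln;
    import std.algorithm : filter, map;
    import std.range : iota;

    void main()
    {
        iota(1, 10)                   // lazy range 1..9
            .filter!(n => n % 2 == 1) // keep the odd ones
            .map!(n => n * n)         // square them
            .writeln;                 // [1, 9, 25, 49, 81]
    }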



Matrix has some usability hurdles. I invited people to join our Matrix and a few did, but when I switched to inviting them to Discord, 10x more people came by and are still there. I prefer Matrix for several reasons but you go to where the people are.


+1 for D.

Python took 20 years after its introduction to become as popular as it is today, thanks to its more intuitive syntax, which was based on ABC.

I really hope that 20 years after its introduction, D will be appreciated and become a de facto language, not unlike Python now. Perhaps even more popular, now that connected tiny embedded sensors and machines in the form of IoT are upon us.



D has been out for 20+ years, or did I misread you?


Yes, you are right; I should have said D2, since it has breaking changes relative to D1.


Not sure if it's because I started as a web/application developer, but I wish I had been introduced earlier to:

- in-process databases (rocksdb, sqlite; see the sqlite sketch after this list)

- FoundationDB

- C/C++ and low-level programming in general (I wish I had learned those instead of JS when I was younger)

- State Machines, Actor Model (Orleans Net), Event Sourcing

- Bittorrent for things other than pirating (though it looks like it's dying)

- Arduino and similar

- Seastar

- Arrow (ecosystem)
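
On the in-process database point, a minimal sqlite sketch in Python: no server and no socket, just a library call and a file.

    import sqlite3

    con = sqlite3.connect("app.db")   # the whole database lives in this file
    con.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    con.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("greeting", "hello"))
    con.commit()
    print(con.execute("SELECT v FROM kv WHERE k = ?", ("greeting",)).fetchone()[0])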



Oh that's easy: Prolog. Have a look under 2023 here for unusual applications thereof: https://emiruz.com/

and here for a book to learn it from: https://book.simply-logical.space/src/simply-logical.html

I think it is the closest thing to a "tool for expressing thought" with a proof procedure, which presently exists.
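
The classic flavor, as a tiny sketch: one relation, queryable in every direction.

    % facts
    parent(tom, bob).
    parent(bob, ann).

    % a recursive rule: X is an ancestor of Y
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

    % ?- ancestor(tom, Who).   enumerates bob, then ann
    % ?- ancestor(Who, ann).   runs "backwards": bob, then tom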



Yes! So many times I reach for Prolog because it's a perfect fit for a problem: modeling constraints, generating a plan, or working out a type system. It's relatively easy to switch between functional and imperative styles, but replacing Prolog means writing heavy algorithms by hand.


Vopono (https://github.com/jamesmcm/vopono):

"vopono is a tool to run applications through VPN tunnels via temporary network namespaces. This allows you to run only a handful of applications through different VPNs simultaneously, whilst keeping your main connection as normal.

vopono includes built-in killswitches for both Wireguard and OpenVPN."



https://gchq.github.io/CyberChef

the cyber swiss army knife





For writing documentation: AsciiDoc [1] as fileformat.

For publishing documentation / to build the web site: Antora [2].

AsciiDoc has quite a few more features than Markdown, which allows for a richer and more pleasant presentation of the docs.

Antora allows you to keep the project documentation in the actual project repositories. It then pulls the docs from all the different repos together to build the site. This also lets released product versions stay in sync with the docs versions: Antora builds each version of the product docs as part of one site, and the reader can explore different product versions or navigate between pages across versions.

===

[1] https://asciidoc.org/

[2] https://antora.org/
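
A small sample of AsciiDoc features beyond basic Markdown (the included file name is made up):

    = Product Guide
    :toc:

    == Installation

    NOTE: Admonitions, includes, and document attributes work out of the box.

    include::install-steps.adoc[]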



When I read about the "Docs as Code" approach, I never see Antora mentioned. Is this an oversight?


Growing in popularity, but still not as famous as it should be: EdgeDB (https://EdgeDB.com)

* Graph-relational database

* Queries return objects linked to other objects through properties, not rows (see the query sketch after this list)

* ... But it's still Postgres under the hood

* Open source
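
As promised above, a sketch of an EdgeQL query against a hypothetical Movie/Person schema; the shape of the query is the shape of the result, with no joins spelled out:

    select Movie {
      title,
      actors: { name }
    }
    filter .release_year > 2000;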



I’ve really enjoyed working with EdgeDB. I totally agree. I’m on a project now that’s using firebase/firestore and edge seems dramatically better suited, but it would be a hard sell.


Lua. It has the feel of python but is implemented in like 15k lines of C. Definitely my favorite scripting language.
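
A tiny illustration of the core ingredients, first-class functions and closures:

    -- a closure: each counter keeps its own private state
    local function counter()
      local n = 0
      return function()
        n = n + 1
        return n
      end
    end

    local tick = counter()
    print(tick(), tick(), tick())  --> 1  2  3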


Black and white film processing. It used to be taught in schools. Many schools still have their darkrooms and no longer use them. It is a practical application of physics, chemistry, and art.


As a kid (11 or 12) I made an enlarger out of an old discarded slide projector, a dimmer switch, and a scrap wood frame. I managed to scrounge enough money to buy supplies to make a few prints, and it worked pretty well. But supply costs were out of reach, so those first prints were all it ever made.


Unpopular, at least compared to cars: e-bikes.

- costs next to nothing to charge

- fast and fun to get around

- never pay for parking

- cheap maintenance

- hauls groceries easily

- good exercise



To add: you can convert acoustic bikes to e-bikes with torque sensing using the TSDZ2 mid-drive motor, which is pretty decent once flashed with the open-source firmware by casainho.


I do all my


[flagged]



Sorry, but many localities make it illegal to ride a bicycle on the sidewalk (where, arguably, they should be riding). That's why many folks ride in the road.

Laws making that illegal are extra stupid since it's relatively hard to kill a pedestrian with a bicycle but downright easy to kill a cyclist with a car.



> Sorry, but many localities make it illegal for bicycle on the sidewalk (where they should be doing it).

No, they shouldn't. The sidewalk is for pedestrian traffic; that's what the "walk" in the name signifies.

> Laws making that illegal are extra stupid since it's relatively hard to kill a pedestrian with a bicycle

Sidewalks can't handle much bike traffic, are suboptimal for it (which is why purpose-built separated bicycle trails are built like roads, not sidewalks), and are in many places less safe for bicyclists, crossing driveways with less visibility for drivers and bicyclists than is the case with the road proper.



Many bicyclists don't want to ride on the sidewalk

Sorry you're forced to slow down and pay attention occasionally



You mean roads?


Roads are for bikes.


That would be insane, but less insane than if they meant dedicated bike trails (which are just as much "pavement" as roads are.)


I mean pavement.


The LV2 audio plugin standard[0], and related stuff like the Atom format[1] used to feed arbitrary data between plugins in realtime.

[0] https://lv2plug.in/ [1] https://lv2plug.in/ns/ext/atom

Edit: Hydrocolloid blister plasters



Slack released a peer-to-peer overlay networking tool called Nebula (https://github.com/slackhq/nebula), which has gotten very little attention on HN.

It's a really simple alternative to something like WireGuard.



Big fan of tusker (https://github.com/bikeshedder/tusker) for PostgreSQL migrations. Tusker takes a SQL-first approach: you write your schema in declarative DDL (I have my entire project in one schema.sql file), and when you edit it, tusker generates the SQL code required to migrate. It uses temporary test databases to run both your declarative DDL and your step-by-step migrations to ensure they stay in lock step. And it can connect to live databases and diff your schema/migrations against reality. I've never seen a better toolkit for schema evolution.


Imba. The best web programming language ever made.

https://imba.io/





https://www.edgedb.com/ is pretty amazing. Postgres queried with a modern language; you can treat relational data like graphs without dealing with joins, plus baked-in migration management and more...


My picks would be Zulip for messaging, and AdGuard Home instead of Pi-hole. And self-host your email, if only to keep that skill alive.

The openSUSE build system is also great for building packages for a lot of distros. It's not just for openSUSE.



What advantages are there for AdGuard Home over Pi-hole?


It's a single Go executable that's much easier to install and keep up to date. It's been a while since I've used Pi-hole, but AdGuard Home also had a better GUI when I first started using it.


For me the main reason to switch to AdGuard was that it can easily run on OpenWRT (and PiHole can't). It's really convenient to run DNS adblock on the same device as your router.


StateCharts - similar to finite state machines, but organized as a tree of nested states; substates inherit the transitions of their parents (sketch below)
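
A minimal hand-rolled sketch in Python (hypothetical, not any particular library): states form a tree, and an event a child state doesn't handle bubbles up to its parent.

    class State:
        def __init__(self, name, parent=None):
            self.name, self.parent, self.handlers = name, parent, {}

        def on(self, event, target):
            self.handlers[event] = target
            return self

        def handle(self, event):
            s = self
            while s is not None:          # bubble up the state tree
                if event in s.handlers:
                    return s.handlers[event]
                s = s.parent
            return self                   # ignore unknown events

    root    = State("on")
    playing = State("playing", parent=root)
    paused  = State("paused",  parent=root)
    root.on("power", State("off"))        # inherited by every substate
    playing.on("pause", paused)
    paused.on("play", playing)

    print(playing.handle("pause").name)   # paused
    print(paused.handle("power").name)    # off (handler inherited from parent)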


This is probably old-hat to you, but I enjoyed this recent StateChart video: https://www.youtube.com/watch?v=-6VzMNoHhtU


Podman - containerization tool I use in all of my side projects
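
Its CLI is intentionally Docker-compatible; a couple of commands hint at what's different, namely that it runs rootless and daemonless (the container name "web" is assumed to exist):

    $ podman run --rm -it docker.io/library/alpine:latest sh   # no daemon, no root needed
    $ podman generate systemd --name web > web.service         # containers as systemd units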


https://github.com/webui-dev/webui is a lightweight alternative to webview/electron/tauri


Dumb question but I thought tauri was the lightweight alternative to electron. Did I remember that incorrectly?


Lightweight often just translates to fewer features. Unless you're rewriting a truly bad piece of software, your "lightweight" alternative will be just as heavyweight once you're done reimplementing everything.


Yes, WebUI uses the real web browser that's already installed, so there's no browser engine to reimplement or embed the way a WebView approach requires. The lib is only about 200 KB!


Everyone who wants to be a well-rounded developer should get at least a little experience with some non-C-like languages.

Of course someone will reply with a more complete language, but I'll start by throwing out array-based languages, in the form of J: https://www.jsoftware.com/#/

Once you really get your head around composing verbs, really working with arrays, and using exponents on functions, it's mind-expanding.
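
A few lines of J to show what composing verbs means in practice: mean below is a "fork" (sum divided by count), and ^: raises a verb to a power by applying it repeatedly.

    mean =: +/ % #     NB. a fork: (sum of y) divided by (count of y)
    mean 3 1 4 1 5     NB. -> 2.8
    +:^:3 ] 1          NB. the verb "double", applied three times -> 8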



vidir from moreutils (https://manpages.debian.org/bookworm/moreutils/vidir.1.en.ht...) and qmv from renameutils (https://manpages.debian.org/bookworm/renameutils/qmv.1). They let you (mass-)rename files using your editor when your editor is not Emacs. If you happen to use OpenBSD, note that moreutils has an OpenBSD port but renameutils doesn't and does not build on OpenBSD.
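
Typical usage looks like this (the qmv format flag is from its man page):

    $ vidir .                              # edit the listing in $EDITOR; changed lines rename, deleted lines delete
    $ find . -name '*.jpg' | vidir -       # works on piped file lists too
    $ qmv --format=destination-only *.jpg  # edit a single column of target names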

