(comments)

Original link: https://news.ycombinator.com/item?id=38733282

You're right. A mechanical timer switch is usually cheaper and easier to implement, especially when you only need coarse intervals or infrequent scheduling. In this particular case the pump only runs for short stretches anyway, so if you have the parts on hand it's absolutely a valid alternative. But if you enjoy coding or want finer-grained control over the schedule, you can also modify a plain ESP controller sketch to add a timing element, something like "setTimeout()", or use an existing library like "moment". But yes, sometimes you take the most practical and efficient route, and sometimes you take the route that teaches you something valuable, or that lets you enjoy the fruits of your labor with an amazing retro-style dashboard (it looks great), an appropriate amount of glitches and/or pixel art, or maybe even a combination of those goals, which is what I did with my dashboard project. Cheers!
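
If you go the coding route, the timing element can be tiny. Here is a minimal sketch of the idea for an ESP-style Arduino environment, not the commenter's actual project; the pin number and intervals are made up for illustration. It runs the pump for a short window at the start of every interval using millis():

    // Run the pump for RUN_MS at the start of every INTERVAL_MS window.
    // PUMP_PIN, INTERVAL_MS and RUN_MS are illustrative values.
    const int PUMP_PIN = 5;
    const unsigned long INTERVAL_MS = 6UL * 60UL * 60UL * 1000UL;  // every 6 hours
    const unsigned long RUN_MS      = 30UL * 1000UL;               // run for 30 seconds

    unsigned long windowStart = 0;

    void setup() {
      pinMode(PUMP_PIN, OUTPUT);
      digitalWrite(PUMP_PIN, LOW);
    }

    void loop() {
      unsigned long now = millis();
      if (now - windowStart >= INTERVAL_MS) {
        windowStart = now;                               // open a new watering window
      }
      // keep the pump on only during the first RUN_MS of each window
      digitalWrite(PUMP_PIN, (now - windowStart < RUN_MS) ? HIGH : LOW);
    }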


Original
Hacker News
Ask HN: What's your "it's not stupid if it works" story?
368 points by j4yav 1 day ago | 465 comments
Cursed hacks, forcing proprietary software to do what you want through clever means, or just generally doing awful, beautiful things with technology?

Not my idea or implementation.

Our startup built a plugin for Microsoft Outlook. It was successful, and customers wanted the same thing but for Outlook Express. Unfortunately, OE had no plugin architecture. But Windows has Windows hooks and DLL injection. So we were able to build a macro-like system that clicked here and dragged there and did what we needed it to. The only problem was that you could see all the actions happening on the screen. It worked perfectly, but the flickering looked awful.

At lunch, someone joked that we just had to convince OE users not to look at the screen while our product did its thing. We all laughed, then paused. We looked around at each other and said "no, that can't work."

That afternoon someone coded up a routine to screenshot the entire desktop, display the screenshot full-screen, do our GUI manipulations, wait for the event loop to drain so that we knew OE had updated, and then kill the full-screen overlay. Since the overlay was a screenshot of the screen, it shouldn't have been noticeable.

It totally worked. The flickering was gone. We shipped the OE version with the overlay hiding the GUI updates. Users loved the product.
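
For readers wondering what that overlay looks like in code, here is a minimal Win32/GDI sketch of the idea, not the actual product code; the function and class names are invented. It grabs the desktop into a bitmap, shows it in a borderless topmost window, and leaves teardown to the caller:

    #include <windows.h>

    // Hypothetical sketch of the "frozen desktop" trick: screenshot the desktop,
    // display it full-screen so GUI automation underneath stays invisible.
    static HBITMAP g_shot;

    static LRESULT CALLBACK OverlayProc(HWND h, UINT m, WPARAM w, LPARAM l) {
        if (m == WM_PAINT) {
            PAINTSTRUCT ps;
            HDC dc = BeginPaint(h, &ps);
            HDC mem = CreateCompatibleDC(dc);
            HGDIOBJ old = SelectObject(mem, g_shot);
            BitBlt(dc, 0, 0, GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN),
                   mem, 0, 0, SRCCOPY);                  // paint the saved screenshot
            SelectObject(mem, old);
            DeleteDC(mem);
            EndPaint(h, &ps);
            return 0;
        }
        return DefWindowProc(h, m, w, l);
    }

    HWND ShowFrozenDesktop() {
        int cx = GetSystemMetrics(SM_CXSCREEN), cy = GetSystemMetrics(SM_CYSCREEN);
        HDC screen = GetDC(NULL);                        // DC for the whole desktop
        HDC mem = CreateCompatibleDC(screen);
        g_shot = CreateCompatibleBitmap(screen, cx, cy);
        HGDIOBJ old = SelectObject(mem, g_shot);
        BitBlt(mem, 0, 0, cx, cy, screen, 0, 0, SRCCOPY);   // copy desktop into the bitmap
        SelectObject(mem, old);
        DeleteDC(mem);
        ReleaseDC(NULL, screen);

        WNDCLASS wc = {0};
        wc.lpfnWndProc = OverlayProc;
        wc.hInstance = GetModuleHandle(NULL);
        wc.lpszClassName = TEXT("FrozenDesktopOverlay");
        RegisterClass(&wc);
        HWND overlay = CreateWindowEx(WS_EX_TOPMOST, wc.lpszClassName, NULL,
                                      WS_POPUP | WS_VISIBLE, 0, 0, cx, cy,
                                      NULL, NULL, wc.hInstance, NULL);
        UpdateWindow(overlay);   // force an immediate WM_PAINT
        return overlay;          // caller runs the automation, then DestroyWindow(overlay)
                                 // and DeleteObject(g_shot)
    }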



The thing (screenshot and all) was routinely done in the '90s with all those fancy non-rectangular applications/demos/launchers (usually found on CDs that came with magazines). They had transparent/alpha zones that copied the screen under them.


Maybe some incompetent people did it that way, but 1-bit transparency was entirely possible with native Windows APIs in the '90s (see SetWindowRgn). Later on (starting with Windows 2000, IIRC) it was also possible to have semi-transparent (alpha-blended) regions.
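
For the curious, the 1-bit "shaped window" variety really is a couple of lines. A minimal sketch (hwnd is any existing top-level window; the ellipse is just an example shape):

    #include <windows.h>

    // Clip a window to an ellipse: pixels outside the region are never drawn
    // and clicks there fall through, i.e. classic 1-bit transparency.
    void MakeElliptical(HWND hwnd) {
        RECT rc;
        GetWindowRect(hwnd, &rc);
        HRGN rgn = CreateEllipticRgn(0, 0, rc.right - rc.left, rc.bottom - rc.top);
        SetWindowRgn(hwnd, rgn, TRUE);   // the system takes ownership of rgn
    }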


It's not stupid if it works...


Please educate yourself before you accuse people of incompetence. Of course it was about (pseudo) alpha blending, because "smooth shadows" around everything became very popular in the late '90s.


In a thread about "it's not stupid if it works," you accuse people of being "incompetent" for doing something ostensibly "stupid but works," while totally missing what was even possible "non-stupidly." There feels like some form of irony here.


I'm not convinced it works, though. A screenshot would break any animations playing underneath.


Yes, obviously there's a grain of irony in the reply. ;)


Not exactly. With 2000/XP you can set the entire window's opacity. Still no native GDI+. So regions were still just shapes (a 1-bit mask). Trying to set a pre-multiplied bitmap on a window will just give you a brightened version (the multiplied version).

Though there was some support for cursor shadows at this point, and the improved GDI heap really helped, making it pretty feasible to fake a window drop shadow. Vista with Aero was the first native support for windows with alpha channels.
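
The whole-window opacity mentioned above is the layered-window mechanism; a minimal sketch of what that looks like (per-pixel alpha would need UpdateLayeredWindow instead):

    #include <windows.h>

    // Windows 2000/XP whole-window opacity via WS_EX_LAYERED.
    void SetWindowOpacity(HWND hwnd, BYTE alpha) {   // 0 = invisible, 255 = opaque
        SetWindowLongPtr(hwnd, GWL_EXSTYLE,
                         GetWindowLongPtr(hwnd, GWL_EXSTYLE) | WS_EX_LAYERED);
        SetLayeredWindowAttributes(hwnd, 0, alpha, LWA_ALPHA);
    }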

I actually liked Vista. It worked just fine and wasn't unnecessarily reorganized. Plus the improvements to the OS threading model were excellent, which is why 7 is so incredibly rock solid, probably peak Windows.



Ahh so THAT's how they did that!


It depends. Windows can be set to shapes going back to like Windows 95, iirc.

But alpha, specifically, would be faked at that point. Windows 2000 supported alpha for such things in a basic way, like XP. Vista with Aero and 7 then really expanded the themes and the window compositing.



Back in the day, terminals under Linux did that to fake transparency: they just copied a chunk of the root window (where you put some wallpaper) as the terminal background and applied an alpha layer if the user wanted a translucent color.
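
A minimal Xlib sketch of that trick, assuming the wallpaper setter publishes its pixmap in the conventional _XROOTPMAP_ID root-window property (which is what terminals like aterm/Eterm looked for); error handling omitted:

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    // Fetch the wallpaper pixmap advertised on the root window. A terminal can then
    // XCopyArea() the rectangle under itself into its own background and tint it.
    Pixmap GetWallpaperPixmap(Display* dpy) {
        Window root = DefaultRootWindow(dpy);
        Atom prop = XInternAtom(dpy, "_XROOTPMAP_ID", True);
        if (prop == None) return None;

        Atom type;
        int format;
        unsigned long nitems, bytesAfter;
        unsigned char* data = nullptr;
        Pixmap wallpaper = None;
        if (XGetWindowProperty(dpy, root, prop, 0, 1, False, XA_PIXMAP,
                               &type, &format, &nitems, &bytesAfter, &data) == Success
            && data != nullptr) {
            wallpaper = *reinterpret_cast<Pixmap*>(data);
            XFree(data);
        }
        return wallpaper;   // None if no wallpaper setter exported the property
    }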


Fake transparency is still a thing, for desktops without compositor. Not only in terminals, but also in docks.


But docks work in a different way, such as XRender or just a PNG with an alpha layer. Much less expensive than copying a chunk of the backdrop and pasting it as your terminal background.

Often docks will be able to show you the underlying windows behind it, even without compositing. But under the terminal with fake transparency you couldn't see another window behind it, just the backdrop part.

I think X11 had some extensions to allow translucency too for software like oneko, xroach or such.



From my experience this is wrong. If you set a transparent background in wxwidgets/gtk it will show you a grey background. And gtk3 even removed the ability to query the x background pixmap. So if there is better support for this at an X level, this is not available above.

See https://github.com/onli/simdock/issues/11, and if there really are good alternative solutions I'd be happy for some help.



Ah, too bad then. So, lots of stuff got deprecated, I didn't notice that until right now.

Still, there was another technique, not querying the background pixmaps, but used by old X11 software to do non-rectangular windows. Oclock, maybe.



Why is it less expensive? Sounds like the exact same operation, except the OS is doing it.


No. XRender worked in a different way than ATerm/Eterm.


Early versions of iOS did this too. You’d tap an app icon and the app would appear instantly in its prior state … but it was just a saved screenshot that iOS displayed while the app was actually loading.


Really it still does, in a way. The ”apps” in the app switcher work that way. Even if the app is still live in memory, it’s not rendering to the switcher all the time. And if it was killed, then the pic is all that’s left.


That's called double-buffering.


I think it was meant as a joke, so for anyone else reading it, no it is not actually double-buffering.


Half a joke, because the concept is very much the same. You "paint" to an invisible buffer and then you swap.


Yeah, but the concept of double buffering is to swap every frame, for performance gains.

Here nothing gets swapped; the screen just gets hidden temporarily. So vaguely similar... but not very much the same, in my opinion.



freeze-frame buffering


double-bluffering?


They invented double-buffering! :)


While this is a great hack, this feels like a terrible product to sell to customers. I work on a game with an SDK for plugin development, but sometimes third party developers go off and want to do things that the SDK doesn't offer. Since they get to be in our address space, it's pretty trivial to snoop around internal resources if you really wanted to. The problem is, when we update some core parts of the engine, we tend to break these things. Then users get upset with us and the third parties start to complain that we broke their hacks. It's just a bad experience for everyone.


You could just overlay the Outlook window.


I absolutely love this.


Love it, lol


I created the most popular Turkish social platform, Eksi Sozluk, using a single plaintext file as its content database back in 1999. It had taken me only three hours to get it up and running, without any web frameworks or anything. It was just an EXE written in Delphi. The platform is still up, albeit running on .NET/MySQL now, and it keeps getting banned by the Erdogan government for baseless or false reasons (like "national security"). Despite being banned, it was the seventh most popular web site in Turkey two weeks ago, and the second most popular Turkish web site on the same list: https://x.com/ocalozyavuz/status/1735084095821000710?s=20

You can find its ancient source code from 1999 here: https://github.com/ssg/sozluk-cgi

The platform is currently at https://eksisozluk1999.com because its canonical domain (https://eksisozluk.com) got banned. Any visitors from outside Turkey should get redirected anyway.

Since it's still a legal business entity in Turkey, it keeps paying taxes to the Turkish government, and it even honors content removal requests despite being banned. Its appeals against the bans have been awaiting review by the Constitutional Court for almost a year now.

A news piece from when it was banned the first time this year: https://www.theguardian.com/world/2023/mar/01/eksi-sozluk-wh...

Its Wikipedia page: https://en.wikipedia.org/wiki/Ek%C5%9Fi_S%C3%B6zl%C3%BCk



Crazy to see you! Some time ago, I was actually looking to add Eksi to Touchbase (www.touchbase.id) since several users reached out and wanted to add it alongside their other platforms to share on their profile, but we couldn't find out the URL convention for user profile feeds! It seemed to be "https://eksisozluk1999.com/{{username}}--{{7 digit value}}", but we couldn't find any rhyme or reason to the 7 digits. Are the integers random, or do they even go back to stemming from a convention from the previous codebase?


User profiles are actually stored like https://eksisozluk1999.com/biri/{{username}}. "/@{{username}}" also redirects to "/biri/{{username}}". You shouldn't need numbers at all. The numbers are only at the end of topic titles. They are title id's (sequential integers assigned when they're created) to disambiguate conflicting Latinized forms of Turkish words.


Back in 1999 or so, I wrote an online shopping site this way, with all the data stored as text files (one per category, with many items in my case... I was 18 years old and had no idea about databases). The site ran smoothly for almost a year until the customer used "*" in the name of a product... which was the character by which all the product data in the text files was split...


That's why you always delimit your data fields in a text file with a unicode snowman

Surely noone will ever use that character!



live and learn. It was the re-split when they saved the new products through my brilliant parser that royally fucked it all up. Genius that I was, I used "|" to separate attributes, but I also definitely used a double asterisk to mean something else. Nothing teaches you not to get clever better than screaming customers and abject failure. And having to find/replace a thousand asterisks to figure out which ones were making the file unreadable. Falling on my face made me the careful coder I am today.


Early career chap over here. Awesome hearing stories like this. Those wild west days certainly have passed. We’ve got so much now to get us started as programmers that it almost robs us of learning experiences somehow.


> Those wild west days certainly have passed.

Not if you see some of the stuff my coworkers write.



Hah. Well you always need to just learn new things. That's what my life taught me.

Check it out. Year is 1999 or so - [edit: scram that, more like 2001] and I'm working at a Starbucks on my laptop. Mind you, wifi does not exist. Having a color laptop is sort of posh. One other person who shows up there every day, this kid Jon who's my same age, he's got a laptop. We end up talking. No one even has a cell phone.

Jon's my age and he's writing PHP scripts. So am I. I have a client I built a website for that needs an online store - they sell custom baby blankets and car seat covers. They want a store where you can choose your interior fabric and exterior fabric for each item, and see a preview. They have 10 interior and 20 exterior fabrics. They sew these blankets by hand for each request, for like $100 each. This is a huge job at the time... it pays something like $4000 for me to write the store from scratch. (I'd easily charge $60,000 now for it). First I have to mock up 200 combinations in photoshop?... so instead I write a script that previews the exterior and interior fabrics. Then I write a back-end in PHP to let them upload each fabric and combine them.

One day I'm sitting at the next table to Jon (he was working on a game at the time, I think - fuck, who knows, we were both 18 year old drop outs) - and I showed him how I wrote these fabric combinations to text files. And he was like... "Dude, have you tried SQL? It's AMAZING!" And I was like, "what the fuck is SQL?"

Yes, people used to pay idiots like us to build their websites. I'm still sort of proud of a lot of shit I got to do back then. But I am thankful to Jon that he introduced me to SQL when I was at the time trying to invent databases from scratch with fopen('r') and fopen('w') and hand-built parsers ;)

[edit] Just one little thing I'd note my friend: If you have a brain, it's always the wild west. Those jobs that make you create something from scratch, they haven't evaporated. Sure, it helps to know newer technologies, but the more important thing is being sure you can do what they're asking for, and then figure out a way to do it. This is the hacker ethos.



And then https://brr.fyi/ breaks the Internet!


You can also encode the special characters when writing to the file and decode them after reading.


Weird. In the same year (1999), I did pretty much the same thing (because strtok really made it easy to split a string) also for client input fields.

Only, I used the ASCII FS character (the Field Separator character) and everything worked brilliantly.
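
Both approaches (a separator that essentially never occurs in user text, or encoding the ones that do) take only a few lines. A minimal C++ sketch of the ASCII-separator version, with illustrative function names:

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    // ASCII reserves control codes for delimiting: 0x1E (Record Separator)
    // between records and 0x1F (Unit Separator) between fields. User text
    // essentially never contains them, unlike '*' or '|'.
    const char US = '\x1F';

    std::string joinRecord(const std::vector<std::string>& fields) {
        std::string out;
        for (size_t i = 0; i < fields.size(); ++i) {
            if (i) out += US;
            out += fields[i];
        }
        return out;
    }

    std::vector<std::string> splitRecord(const std::string& record) {
        std::vector<std::string> fields;
        std::string field;
        std::istringstream in(record);
        while (std::getline(in, field, US)) fields.push_back(field);
        return fields;
    }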



additionally: if a nickname has spaces, we have to type "%20" instead of spaces in the links with "/@{{username}}"

i've submitted an entry about this a few minutes ago. https://eksisozluk1999.com/entry/143247963



*months, not minutes. sorry for autocorrect (i wasn't using english keyboard.)


It should not have to be said, but (especially in the West) we tend to forget about it:

Turkey has more inhabitants than the most populous country in western Europe (Germany). Turkey is also significantly larger than the largest country in western Europe (France).

When it comes to the number of Internet users it is on par with Germany and beats all other western European countries.



True. I think the number of Internet users in Turkey has surpassed 70 million. Eksi Sozluk used to receive 30+ million unique visitors monthly before it got banned.


Using Apple’s translate function I was able to read many of the posts - very interesting to see the differences between American and Turkish social media.

There were many posts about cats and their livelihoods and protection. Love that



> There were many posts about cats and their livelihoods and protection. Love that

Turks have a wonderful relationship with cats, especially in Istanbul: https://en.wikipedia.org/wiki/Feral_cats_in_Istanbul

There is a nationwide no-catch and no-kill policy for feral cats.



There’s a theory that cats mostly domesticated themselves; human settlements and their large grain stores proved to be a reliable source of rodents for them to hunt, and the humans tolerated the cats because they kept the rodent problem in check, but these cats would have lived a semi-domesticated lifestyle around human settlements without initially being kept as household pets. Maybe the feral cats of Istanbul are the closest modern approximation to this.


Definitely in line with the axiom of sits where fits


When I was a kid, that still used to be the norm on farms. There were farm cats and house cats. The farm cats were there to kill rodents and otherwise minded their own business. You could not just pick them up; they would have bitten you. I think this has gone out of style, as I have not seen the division nowadays and all cats seem to have become tame.


Barn cats are still a thing, but they are typically still owned and kept whereas I was talking more about free roaming cats that live around human settlements. The early free roaming cats would have been about as tame as barn cats; my impression is that the cats of Istanbul are more friendly.


Would love to see that implemented here in USA. People in places like NYC love to catch and spay every cat they see, then go on to complain about too many rodents around.


Rodents are an issue of trash and food left around, not a problem of not enough cats.

Cats shit in my garden and leave dead songbirds where I grow food. No, we don’t need more cats.



Do you not think rodents are in your garden … where food is left around?


To be clear, your claims are:

1. An increase in the population of predators will have no effect on the population of the prey.

2. Rats don’t shit and/or aren’t attracted to food.

Are you speaking in jest or do you actually believe those? The fact that your second statement directly contradicts your first doesn’t help.



I'm very glad to hear that it's readable using a translator!

In fact, the community dynamics resemble Reddit a lot despite having significant differences in layout and format. Irony, sarcasm, harsh criticism are common yet tolerance of differing viewpoints is relatively high compared to other platforms where people just flock to their own bubble or just block everyone else who they don't agree with.

It's fun too, has a rich history spanning a quarter century, and has been quite influential.



Don't forget to archive it with responsible parties, for future history and anthropological research. It would be a shame to lose so much public discourse, especially if it's so influential.


It's been the subject of many academic research projects[1]; I've lost count by now. It even made it into books[2][3]. We used to provide raw text dumps to researchers, or they would crawl the site themselves. One interesting paper claimed that Eksi Sozluk users were able to detect earthquakes very early and reliably[4]. Internet Archive also hangs around the site, archiving here and there. But a proper and full archive should be preserved, I agree.

[1] https://scholar.google.com/scholar?lr=lang_en&q=eksi+sozluk&...

[2] https://www.routledge.com/The-Routledge-Companion-to-Global-...

[3] https://www.twitterandteargas.org/

[4] https://gc.copernicus.org/articles/4/69/2021/



Sedat, you’re a legend, and a machine, great to see you here or anywhere. Good luck with the legal challenges.


Thank you, Leon!


Oh, I did something similar. I built a quite popular local (non-English-language) gaming forum with an Access file hosted on a Windows server and a VBScript ASP file; ASP had just been released at the time. That's the original ASP, before ASP.NET. I was 13 or 14 years old at the time and didn't know better. It was no SQLite, so I had some weird concurrency problems. On top of that I ran into some size limit (was it 2GB?) pretty quickly, but at that point it was time to look for a bigger server and figure out real databases anyway.

It eventually stopped being popular under my administration, so I transferred the domain to some people around 1999. It was rebuilt with PHPBB or something and got a new life. It's still on, surprisingly.



Fascinating that our stories intersect so much. I later converted that Delphi code to ASP/VBScript because native Delphi code ran really slowly on a new DEC Alpha AXP server due to emulation on the RISC architecture. ASP code was much faster despite being interpreted :) I found ASP way more practical too. Access was also my natural next choice of database. Not very scalable, but a night-and-day difference compared to a text file :)


I never really stopped to think about it, but ASP was indeed quite performant, considering it was all interpreted, running in late 90s shared-hosting hardware with very little RAM and super slow hard disks. The site got a few thousand active users and worked quite well, apart from the DB size limits.

Fast forward 10, 20, almost 30 years and I frequently encounter websites that struggle to work under the same load, even with expensive AWS bills, especially when working with Rails.

Perhaps ASP was performant because the site was a few orders of magnitude smaller than anything you'd see today, even though it was full-featured. Probably 1000x or 10000x smaller if I also include third-party libraries in the count. It was quite comparable to serverless/edge computing, actually.



I hope things go back to normal soon. Good to see you here, Sedat. Cheers.


Thanks, Huseyin!


Why do dictators love to ruin old stuff?


I don't know, why do dictators love to ruin old stuff?


Because they think they oughtacrack.

(That's the best I've got, clearly need more crackers.)



Because they did Nazi its value.


“Who controls the past controls the future”


Is there a reason why they are not taking the 1999 version of the domain down?


Because the platform switched to it only last week, no other reason. It was on eksisozluk1923.com before that. The moment this new domain catches up on popularity, they would find an arbitrary reason to ban that too.


Let's hope things change after 2028. Optum kardesim. (Turkish: "Kisses, brother.")


I don't think that "elections" change the result of dictatorship


Çok iyi :) (Turkish: "Very good.")


How did you make a Windows executable work on the web?


Using the CGI protocol on a Windows server. IIS (Windows' own web server) basically interfaces with executables by running them, feeding them the HTTP headers and server variables through environment variables, and reading the response HTTP headers and body from their STDOUT. It's very inefficient, of course, since every request requires spawning a new copy of the executable, but it worked fine in its first months :)

Here is a very simple example from the original sources: https://github.com/ssg/sozluk-cgi/blob/master/hede.pas
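
The contract is small enough to show in full. Here is a minimal CGI responder sketch in C++ (the Pascal original linked above does the equivalent; the query string shown is just an example): the server puts request metadata in environment variables and takes the response headers, a blank line, and the body from stdout.

    #include <cstdlib>
    #include <iostream>

    // Minimal CGI program: read QUERY_STRING from the environment,
    // write "headers, blank line, body" to stdout. The server does the rest.
    int main() {
        const char* query = std::getenv("QUERY_STRING");   // e.g. "hede=pena"
        std::cout << "Content-Type: text/html\r\n\r\n";
        std::cout << "<html><body>query: " << (query ? query : "(none)")
                  << "</body></html>\n";
        return 0;
    }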



Don't sell yourself too short here; that's exactly how Perl/PHP worked, and it was the de facto standard around the same vintage (and for a decade more).


Honestly, there's a lot of beauty in that simplicity. I can definitely imagine someone wanting to work with mod_php in Apache as well (just a module for the web server).

That said, FastCGI and similar technologies were inevitable and something like PHP-FPM isn't much more difficult to actually run in practice.

Still, having a clearly defined request lifecycle is wonderful, especially compared to how Java application servers like Tomcat/Glassfish used to work with Servlets - things there have gotten better and easier too, but still...



Agree. I also loved the simplicity. It’s not that different from Serverless, if you look at it.

There is an HTTP server handling all the HTTP stuff and process launching (which is handled by API Gateway in AWS, for example), and the communication between it and the “script” just uses language or OS primitives instead of more complex APIs.

The 2000s were quite wild in how things changed… suddenly you have giant frameworks that also parse HTTP, a reverse proxy. At some point even PHP became all about frameworks.

I wonder if we wouldn’t have a more open, standardized and mature version of CGI/Serverless if it had been a more gradual transition rather than a couple of very abrupt paradigm shifts.



I imagine it ran server-side (on Windows).


Indeed. I think it's worth going a little deeper for those who perhaps aren't familiar with some of the underlying principles of the Web.

For starters, all the program does is receive requests (as text) over a TCP/IP connection. It replies over the same connection.

So writing a Web server in any language, on any OS, is a trivial exercise (once you have the ability to read and write TCP/IP).

The program has to accept the input, calculate the output, and send the output.

If the input is just file names, then the program just reads the file and sends it. (Think static site).

The program may parse the file, and process it some more. It "interprets" code inside the file, executes it, and thus transforms the output. Think PHP.

In these cases a generic server fits the bill. Think Apache, IIS, nginx and so on.

The next level up are programs that are compiled. They generate the output on the fly, often with no, or little, disk interaction. This sort of program often uses a database, but might not. (An online Sudoku game, for example, might do everything in memory.)

Again, any of the above can be built on any OS and written in any language with TCP support.
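
To make that concrete, here is a minimal (and deliberately naive) sketch of such a server in C++ on POSIX sockets: accept a connection, read the request text, write the response text. The port and body are illustrative; there is no parsing, threading, or error handling.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        int yes = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(srv, 16);

        for (;;) {
            int client = accept(srv, nullptr, nullptr);
            char request[4096];
            recv(client, request, sizeof(request) - 1, 0);    // request arrives as plain text
            std::string body = "<html><body>hello</body></html>\n";
            std::string resp = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n"
                               "Content-Length: " + std::to_string(body.size()) +
                               "\r\nConnection: close\r\n\r\n" + body;
            send(client, resp.data(), resp.size(), 0);        // response is plain text too
            close(client);
        }
    }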



[flagged]



Among the millions of entries on the platform, not one single piece of content on the platform was presented as evidence for the ban decisions. Just ambiguous words or false claims.

Shouldn't it be straightforward to prove that Eksi Sozluk lacks "meaningful moderation"? Shouldn't it be a requirement for such a drastic action like banning the whole web site?

Twitter produces orders of magnitude more disinformation in volume, amplified way faster and way broader too, yet they don't get any ban from Turkey whatsoever. How do you explain this kind of double standard?



It should be pretty straightforward for you to show you have any moderation whatsoever; I don't believe you do. The whole of the site is full of rubbish.

If I were the Turkish government I would never even do it the favour of banning the site, because that draws attention the site doesn't deserve. I don't care if it's the most visited site or whatever; it's just useless.



Well, whenever we enter an evidence war like this we must go back to the old standard. The burden of proof is on the accuser. It has to be. If the burden of proof is always on the defendant, all you need is 30 people making accusations and it becomes impossible to defend against. It's basically a legal DDOS.

Also, I think every logical person can see that it's much, much easier to provide a single example of a lack of moderation than to satisfy a nebulous "prove you have moderation". What kind of standard do you have for that, and how do we know you're not going to shift the goalposts the moment they bring you what you ask for? An example is an example is an example. Provide your proof or cease accusations. I've seen this argument many times in my country, always used to shut down free expression and enforce repression. There are great books and videos on logical fallacies out there.



Lol, no need for an essay. I didn't mean the service provider has to prove they have moderation to the officials. I meant just here: it would be easy to just say the site has moderation. I don't believe there is, which means it's a dumping ground with everyone posting all sorts of trash. Which, by the way, is another reason the site shouldn't even warrant any attention, but obviously government officials are stupid enough to bother.


Reframing what you just said, you think people should prove they are innocent against any accusation because it would be "pretty straightforward"?

Either you like authoritarian governments or you have it in for this website (or both?)



Based on your statements here, it sure seems like you care. What motivates you to claim otherwise?


Funny you say that, because Eksi Sozluk is probably the only mainstream social platform with its site moderation logs open to all its users. There's a link to it on every page of the site. See it for yourself: https://eksisozluk1999.com/modlog

No other popular social platform has this level of transparency. Not Twitter, not Reddit, not Facebook. How did they prove they have moderation to Turkish government? How do they not get banned?

Do you know?

Do you want to know?

Do you care about what's right or fair, or do you just believe in something and want to justify your beliefs in whatever means necessary?



Don't try to normalize Aladdin by saying everything he does is Aladdin. He is such an Aladdin you can't defend him by saying he's Aladdin!!


Got any problematic examples?


Back in the '90s I consulted at HBO, and they were migrating from MS Mail on Mac servers to MS Exchange on PCs. Problem was that MS Mail on the Mac had no address book export function, and execs often have thousands or even tens of thousands of contacts. The default solution was for personal assistants to copy out the contacts one by one.

So I experimented with screen hotkey tools. I knew about QuicKeys, but its logic and flow control at the time were somewhat limited. Then I found one that had a full programming language.

I wrote and debugged a tool that:

   1. Listened to its own email box: [email protected]
   2. You emailed it your password (security? what security?)
   3. Seeing such an email, it logged out of its own email and logged in to yours.
   4. Then it opened your address book and copied out entries one by one. 
   5. It couldn't tell by any other method that it had reached the end of your address book, so if it saw the same contact several times in a row it would stop.
   6. Then it formatted your address book into a CSV for importing to Exchange, and emailed it back to you.
   7. It logged out of your account, and back into its own, and resumed waiting for an incoming email.
This had to work for several thousand employees over a few weeks. I had 4 headless pizza box Macs in my office running this code. Several things could go wrong, since all the code was just assuming that the UI would be the same every time. So while in the "waiting" state I had the Macs "beep" once per minute, and each had a custom beep sound, which was just me saying "one" "two" "three" and "four". So my office had my voice going off an average of once every fifteen seconds for several weeks.


The voice thing is hilarious. Thanks for sharing.


I did a similar thing in the Win9x days. I had some sound alert going off once in a while and I couldn't figure out what was causing it, worse, I didn't even recognize the sound. (It wasn't the standard "ding" or "chord".)

And when I went into the Windows sound scheme configurator, it had wacky names for some events like "asterisk" and "critical stop", with no explanation of what might trigger them.

So as a first step of narrowing it down, I made self-explanatory sounds for everything: I just recorded my voice saying "open program", "program error", "restore down", "exclamation", and so on, through the whole list, and assigned each sound to its respective event. There were a lot of them!

(Mind you, it was all the rage at the time to have whole collections of funny sounds assigned to all this stuff, movie lines and SFX and what-not, so there were these subtle games of one-upmanship to have a cooler sound scheme than anyone else.)

Not me. I had created the world's most humorless sound scheme. The only possible improvement would've been Ben Stein voicing the whole thing.

But in doing so, after a while, it took on this air of absolute hilarity. Like here's this machine that's capable of anything, it could make a star-trek-transporter sound, but there's just some guy's voice saying "empty recycle bin" with a flat, bored affect.



Did you ever figure out what was causing the sound?


Nope. And every time I asked "is there some sort of program that just logs every time another program makes a sound?", I was told it was deeper system-level magic than anyone sane would ever attempt.


Cole ExPorter. lol.


In the early days of Google Chrome, I was tasked with making it work with Windows screen readers. Now, accessibility APIs on Windows were documented, but web browsers used a bunch of additional APIs that were poorly documented. Chrome's design was radically different than Firefox's and IE's, so it was a challenge to implement the APIs correctly. At first I was just trying to get it to work with completely static web pages.

Screen readers were reading form controls, but no matter what I did they weren't activating any of their web-specific features in Chrome. I spent weeks carefully comparing every single API between Firefox and Chrome, making the tiniest changes until they produced identical results - but still no luck.

Finally, out of ideas, I thought to build Chrome but rename the executable to firefox.exe before running it. Surely, I thought, they hadn't hard-coded the executable names of browsers.

But of course they had. Suddenly all of these features started working.

Now that I knew what to ask for, I reached out and made contact with the screen reader vendor and asked them to treat Chrome as a web browser. I learned another important lesson, that I probably shouldn't have waited so long to reach out and ask for help. It ended up being a long journey to make Chrome work well with screen readers, but I'm happy with the end result.



I had a similar problem with Nvidia drivers. My boss purchased some laptop with 3D glasses and drivers and wanted me to get our custom software working on it for a cool demo.

I read the docs, implemented the right calls, but no matter what I did, I never could get the stereo to kick in. Finally, I renamed our app to doom.exe based on an obscure forum post somewhere, and it immediately worked.

I think the driver had a whitelist of games it worked with, and only enabled stereo for them.



That is a good story :) It makes me wonder whether some future Wayland protocol may enable clients/apps to advertise themselves as screen-reader capable.


Lynx is my most-used/favorite browser. I get variations daily of "your browser is unsupported" messages that almost always go away, without website degradation^1, once I have lynx self-report as "Mozilla" - argh, white-listing.

^1 Excepting javascript, of course.



Love the lesson here. Don’t wait to reach out for help!


15+ years ago, I was working on indexing gigabytes of text on a mobile CPU (before smart phones caused massive investment in such CPUs). Word normalization logic (e.g., sky/skies/sky's -> sky) was very slow, so I used a cache, which sped it up immensely. Conceptually the cache looked like {"sky": "sky", "skies": "sky", "sky's": "sky", "cats": "cat", ...}.

I needed cache eviction logic as there was only 1 MB of RAM available to the indexer, and most of that was used by the library that parsed the input format. The initial version of that logic cleared the entire cache when it hit a certain number of entries, just as a placeholder. When I got around to adding some LRU eviction logic, it became faster on our desktop simulator, but far slower on the embedded device (slower than with no word cache at all). I tried several different "smart" eviction strategies. All of them were faster on the desktop and slower on the device. The disconnect came down to CPU cache (not word cache) size / strategy differences between the desktop and mobile CPUs - that was fun to diagnose!

We ended up shipping the "dumb" eviction logic because it was so much faster in practice. The eviction function was only two lines of code plus a large comment explaining all the above and saying something to the effect of "yes, this looks dumb, but test speed on the target device when making it smarter."
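
A minimal sketch of that "dumb" cache (the original was Java 1.4 on the device; this is just the shape of the idea in C++, with class and method names invented):

    #include <string>
    #include <unordered_map>

    // Word-normalization cache with whole-cache eviction: when it fills up,
    // throw everything away instead of maintaining LRU bookkeeping that
    // thrashed the mobile CPU's cache.
    class StemCache {
    public:
        explicit StemCache(std::size_t maxEntries) : max_(maxEntries) {}

        // Returns the cached stem, or nullptr on a miss.
        const std::string* find(const std::string& word) const {
            auto it = cache_.find(word);
            return it == cache_.end() ? nullptr : &it->second;
        }

        void insert(const std::string& word, const std::string& stem) {
            if (cache_.size() >= max_) cache_.clear();   // the two-line "dumb" eviction
            cache_.emplace(word, stem);
        }

    private:
        std::size_t max_;
        std::unordered_map<std::string, std::string> cache_;
    };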



This reminds me of something I encountered when working on surgical training simulators about ten years ago.

There was a function which needed to traverse a large (a few million vertices) mesh and, for each vertex, adjust its position to minimise some measurement.

The original code, written by a colleague, just calculated in which direction to move it and then, in a loop, made changes of decreasing magnitude until it got close enough.

This function was part of a performance bottleneck we had to solve, so I asked my colleague why he hadn't solved it analytically. He shrugged and said he hadn't bothered because this worked.

So, I rewrote it, calculating the exact change needed and removing the loop. My code took twice as long. After analysing why, I realised with his heuristic most triangles required only 1 iteration and only a handful required at most 3. This was less work than the analytical solution which required a bunch of math including a square root.



Similarly, a modder recently found that unrolling loops _hurt_ performance on the N64 because of RAM bus contention: https://www.youtube.com/watch?v=t_rzYnXEQlE


Unrolled loops can also often hurt for the same reason on big server chips. It's not always clearly good to unroll your loops.


Those are my favorite functions! Two lines of code with a page of text explaining why it works.


... how does doing a full string dict lookup take less time than just checking a few trailing characters in a trie? For indexing it's okay to be aggressive since you can check again for the actual matches.


We used a JIT-less subset of Java 1.4 on that device. Hashing of word-length strings in the underlying optimized C code was extremely fast and CPU cache friendly (and came with the JVM). With the simple cache in place, indexing time was dominated by the libraries that extracted the text from the supported formats. So, in line with this Ask HN's topic, it was good enough. And less code to maintain. And easier for engineers after me to understand. A good tradeoff overall.

More technical details for the curious...

Earlier I had done a quick trie implementation for other purposes in that code, but abandoned it. The problem is that we had to index (and search) large amounts of content in many different languages, including Chinese and Japanese with full Unicode support. This means that there is such large potential fan-out / sparsity within the trie that you need faster lookups / denser storage at each node in the trie (a hash map or binary search or ...). In that situation, a trie can be much slower than a single hash map with short strings as keys. Especially in a JIT-less JVM (the same code had to run server-side, where native extensions weren't allowed). If we were only dealing with ASCII, then maybe. And there would also be more complexity to maintain for decades (you can still buy newer versions of the device today that are running the same indexing and search code).

All those languages were also the reason that normalization needed caching. In v1, we were English only. I hand rolled a "good enough" normalizer that was simple / fast enough to not need caching. In v2 we went international as described above. I wasn't capable of hand rolling anything beyond English. So we brought in Lucene's tokenizers/stemmers (including for English, which was much more accurate than mine). Many of the stemmers were written in Snowball and the resulting Java code was very slow on the device.



> since you can check again for the actual matches.

Can you explain this?



An aggressive stemmer might stem both "generic" and "general" to "gener".

Then if your query is "what documents contain 'generic'?", you look in the index for "gener" and then open each of those documents and check if it actually has "generic" using a stricter stemmer (that accepts generic{,s}, genericness{,es}, genericit{y,ies}, generically ... this is a bit of a bad example since they all have the prefix directly). The cost is acceptable as long as both words have about the same frequency so it doesn't affect the big O.

Of course if you have any decent kind of compute, you can hard-code exceptions before building the index (which does mean you have to rebuild the index if your exception list changes ... or at least, the part of the index for the specific part of the trie whose exception lists changed - you don't do a global lookup!) to do less work at query time. But regardless, you still have to do some of this at query time to handle nasty cases like lie/lay/laid (searching for "lie" should not return "laid" or vice versa, but "lay" should match both of the others) or do/does/doe (a more obviously unrelated example).



> and then open each of those documents

That alone ruled out doing anything like this on the device I'm talking about. The goal, which we reached, was to be able to search 1,000 documents in 5 seconds. Opening a document took nearly a second given the speed of our storage (a few KB/s). The search itself took about a second, and then we'd open up just enough of the documents to construct search result snippets as you paged through them.



Gosh this story makes me lament the state of our field.

If the current gen of devs were to build this, it would all be done "on the cloud" where they can just throw compute at the problem, and as long as the cost was less than 5$ per month they wouldn't care. That's the problem of the product managers, marketing execs and VCs.



I know exactly what you're talking about. The product manager on the project described above added little value. Luckily, they were so ineffective that they didn't get in the way often. I've had others who were "excellent" at getting in the way.

That said, three of the most impressive people I've ever known are a former marketing exec and two former product managers, all of whom now work in VC. In their former roles, each helped me be the best engineer I could be. The people in their current VC portfolios are lucky to have them as advisors. What makes them so good is that they bring expertise worth listening to, and they clearly value the technical expertise brought by engineers. The result is fantastic collaboration.

They are far from typical, but there are truly great ones out there. Losing hope of that might make it more difficult to be aware of the good fortune of working with one, and maximizing the experience. My hope is that every engineer gets at least one such experience in their career. I was lucky enough to experience it repeatedly, working with at least one great one for about half of my 30-year career.



This lament is about as interesting as complaining about kids not knowing how to use rotary phones.


I had a database that was in a boot-crash loop because it had a corrupted innodb_history_list for a given table.

Everything would be ok if we could just delete the table, but that would involve opening a session and that wouldn’t be possible because of the immediate crashing.

On a whim I thought, "well, what if I just need a really short amount of time to connect before it reboots?" So I opened up 5 terminal windows and executed "while true; do mysql -e 'drop table xyz'; done" in all of them.

After about 10 minutes one of the hundreds of thousands of attempts to connect to this constantly rebooting database succeeded and I was able to go home at a reasonable time.



This is why remote work is so important. You could have been home the entire time.


This actually highlights a negative aspect of remote work. When you work from home, it is easy to lose track of time and end up working the whole night. Here GP had a clear motivation: solve the problem in time to get back home and presumably disconnect.

That's why I actually like to work on site on Fridays. Because I know that when I leave the office, I am done for the weekend. And if I stay for too long, security will kindly remind me that the office is closing and I should leave. So laptop turned off, in the bag, and it stays there for the weekend. Even better if Monday is also on site, since I can just leave the laptop in the office, locked away.

It is a psychological trick, but it works for me. Your mileage may vary.

On a more technical note, don't assume the database can be administered over the internet/VPN. Real private networks still exist.



To play devil's advocate, had OP not wanted to go home so badly Parkinson's Law would've kicked in and OP may have tried to do things the "right way" which may have taken much longer.


Also if the DB was on-prem, then the latency when connecting from home might have been too high for the hack to work.


Any ideas on what the "right way" would be in this case? To me the solution seems the most straightforward.


Drop the table from some sort of safe mode, or figure out the bad entry in the table and hex edit the file to exclude it, or find/write some sort of recovery/fsck program for the particular database flavor in question. Those are three alternatives that come to mind for me, which is to say, I wouldn't have thought to spam the db like that. Neat trick!


Fixing a CD drive with Polish Kielbasa:

The CD drive in my first computer broke. We couldn't afford to get a new one, and after almost a year of using floppies I got a bit tired of having to carry them across the mountains every time I wanted to play a new game. (Context: I lived in a small village in southern Poland at the time -- imagine Twin Peaks, but with fewer people and bigger hills.) Sometimes getting a copy of Quake or Win 95 took several trips back and forth, as I didn't have enough floppies and the ones I had would get corrupted.

I turned 10 and finally decided to disassemble the drive and try to fix it. I found the culprit, but I realised that I needed a lubricant for one of the gears. In that exact moment my little brother was just passing by our "computer room", eating a bun with kielbasa (the smoky, greasy kind which is hard to find outside of PL). I put some of that stuff on a cotton swab, lubricated the gears in the drive, and magically fixed it. The drive outlived my old computer (may it rest in pieces). I miss a good Kielbasa Wiejska.



I have a similar story, except instead of a CD drive and kielbasa I had a floppy drive (on an XT clone), and I used oil... olive oil, that is :-).

It worked perfectly for years after that.



Reminds me of my brother where he would bring 6-7 floppies to a cafe just to download an anti-virus update.


This is glorious.


Animal fats make a very good lubricant if the temperature of the parts doesn't rise too high.

Once upon a time, car transmissions used whale oil.



My favorite one is probably from when I was working at a retail Forex company where consumers would try to make money on currencies. There were a lot of support calls where they disputed the price they saw versus the price at which their order was entered. My solution was to log the price when they clicked the trade button. The interesting bit wasn't that I logged the currency pair and price; instead, I did a tree walk of all the Java Swing GUI elements in the open trade window and rendered them into the log file as ASCII, using "(o)" for options, "[x]" for checkboxes, "[text_____]" for text fields, etc. I wasn't sure if it would work, as the elements were rounded to the nearest line, and sometimes a line just got inserted between two others if an element fell close to half a line in between, etc.

The ASCII 'screenshots' came out beautifully. From then on, when a call came in, we told them to use the view log menu item and scroll to the trade time; then they'd shut up quick. A picture is worth a thousand words indeed.



I wanted a smart thermostat, but my 30-year-old natural gas heater didn't support them. I only had a wheel I could turn to set the temperature.

So I took double sided tape, stuck a plastic gear on the wheel and put a servo on with another gear on the side, connected to a raspberry pi, that would turn the wheel when my phone would enter a geofence around the flat.

Picture: https://ibb.co/nDvwndp

I even had a bash script to calibrate the servo which would turn the wheel and ask which temperature it set, so it could figure out the step size.



I've been smartifying dumb devices at home as well and came up with similarly Rube Goldberg-esque solutions, although ultimately I didn't need to actually implement any of these, as I found other ways to achieve my goals.

One of them involved pointing an old webcam at a segment display, converting the shitty image to monochrome in a way that leaves vague shapes for the digits and other state icons but clamps everything else to oblivion, and just using some fixed pixel positions to "read" the device state from that.

Made for some fun prototyping, though.

Also reminded me of: https://thedailywtf.com/articles/ITAPPMONROBOT



This reminds me of an old Bosch clock I own. This was one of the early electric consumer clocks. It looks very futuristic. But inside, it is simply a mechanical clock with an electric motor attached to the wind-up mechanism. Every few minutes, the motor spins up for a second and winds the clock up again.

https://www.youtube.com/watch?app=desktop&v=0DU0KX9gIk8

The clock is extremely reliable, though. The last batteries lasted for 10 (!) years.



This doesn't really qualify as "stupid" though.

That's just the good old "interfacing with legacy systems" routine. :)



There are whole product ranges for stuff like this now (stuff where you cannot change an interface that was built for fingers, not automation).

eg:

https://www.youtube.com/watch?v=6RJ-zWJcEKc (not affiliated; cheaper models available on AliExpress, both Bluetooth and Zigbee) - you stick it on somewhere and it pushes a button for you. With added accessories, it can even push (technically pull) a light switch the other way, so you only need one unit per light switch. You can also use it to restart a server/PC, press a TV remote button, or even a garage/ramp opener, etc., with zero electronics knowledge and no modification (if you're renting and don't want to replace stuff).



I just took a 24v AC wall wart power supply and shoved it in the same terminals as the hot and neutral control on the smart thermostat. The A in AC makes this arrangement work just fine to allow the battery of the thermostat to charge without zapping anything.


Brilliant


When I was younger I learned that sed was Turing complete. So I did what any young woman would do: I built an entire search engine in sed. But it wasn't some useless little search tool that provided bad search capability for a website; no, nearly every page of the site (minus a few, like the about page) was nothing more than a presentation on top of search query results. Several thousand hardcoded "known" pages and infinite possible pages depending on user searches. Because it was the foundation of the site, search worked, unlike most website searches of the era (~2005). The site happily ran for about a decade, with surges of traffic now and then, before a server migration and too little time prevented its continued existence.

Adult me is both horrified and impressed at this creation.



Did you have a file named “main.sed” or was everything a giant bash script that started with “sed $(cat


Wow. Out of everything, this is the most impressive.


I had a friend who wrote a minimal dbms in awk just because someone told him it couldn’t be done!


We have a production service running for years that just mmaps an entire SSD and casts the pointer to the desired C++ data structure.

That SSD doesn't even have a file system on it, instead it directly stores one monstrous struct array filled with data. There's also no recovery, if the SSD breaks you need to recover all data from a backup.

But it works and it's mind-bogglingly fast and cheap.
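
A minimal sketch of the pattern (the device path, record layout, and count below are made up; a real deployment also needs a fixed, explicitly sized layout and some flush/ordering policy):

    #include <cstddef>
    #include <cstdint>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    // Map a raw block device and treat it as one giant struct array.
    struct Record {
        uint64_t key;
        uint64_t value;
        char     payload[48];
    };

    int main() {
        const std::size_t kCount = 1000000;                 // records on the device
        const std::size_t kBytes = kCount * sizeof(Record);

        int fd = open("/dev/nvme0n1", O_RDWR);              // no filesystem on it
        if (fd < 0) return 1;
        void* base = mmap(nullptr, kBytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) return 1;

        Record* records = static_cast<Record*>(base);       // "cast the pointer" step
        records[42].value = 7;                              // writes go straight to SSD pages
        msync(base, kBytes, MS_ASYNC);                      // ask the kernel to flush eventually

        munmap(base, kBytes);
        close(fd);
        return 0;
    }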



I've always wanted a Smalltalk VM that did this.

Eternally persistent VM, without having to "save". It just "lives". Go ahead, map a 10GB or 100GB file to the VM and go at it. Imagine your entire email history (everyone seems to have large email histories) in the "email array", all as ST objects. Just as an example.

Is that "good"? I dunno. But, simply, there is no impedance mismatch. There's no persistence layer, your entire heap is simply mmap'd into a blob of storage with some lightweight flushing mechanic.

Obviously it's not that simple, there's all sorts of caveats.

It just feels like it should be that simple, and we've had the tech to do this since forever. It doesn't even have to be blistering fast, simply "usable".



That is so wonderfully fascinating to me. You could just download a file into a variable and when that variable goes out of scope/has no more references it’d just be automatically “deleted”. Since there’s no longer a concrete “thing” called a file, you can organize them however you want and with whatever “metadata” you want by having a dict with the metadata you want and some convention like :file as the key that points to the body. Arbitrary indexes too; any number of data structures could all share a reference to the same variable.

Simple databases are just made up of collections of objects. Foreign key constraints? Just make the instance variable type a non-nullable type. Indexes? Lists of tuples that point to the objects. More complex databases and queries can provide a set of functions as an API. You can write queries in SQL or you can just provide a map/filter/reduce function with the predicate written in normal code. Graph databases too: you can just run Dijkstra’s algorithm or TSP or whatever directly on a rich persistent data structure.

Thanks for the neat idea to riff on. I like it! Thinking about it in practice makes me a little anxious, but the theory is beautiful.



So, I've occasionally played around with a language that pretty nearly does this.

Mumps is a language developed in 1967, and it is still in use in a few places including the company where I work.

The language is old enough that the first version of it has "if" but no "else". When they added "else" later on it was via a hack worthy of this post: the "if" statement simply set a global variable and the new "else" statement checked that. As a result, "if-else" worked fine but only so long as you don't use another "if" nested within the first "if" clause (since that would clobber the global variable). That was "good enough" and now 50 years later you still can't nest "if" statements without breaking "else".

But this very old language had one brilliant idea: persistence that works very much the way you describe. Any variable whose name begins with "^" is persisted -- it is like a global variable which is global, not just to this routine but to all of the times we execute the program.

It is typical to create single variables that contain a large structure (eg: a huge list with an entry for each customer, indexed by their ID, where the entry contains all sorts of data about the customer); we call these "tables" because they work very much like DB tables but are vastly simpler to access. There's no "loading" or impedance mismatch... just refer to a variable.

Interestingly, the actual implementation in modern day uses a database underneath, and we DO play with things like the commit policy on the database for performance optimization. So in practice the implementation isn't as simple as what you imply.



That global persistence model across executions is very fascinating. If you don't mind, could you explain what line of work this is and how it helps the use case? I have encountered similar concepts at my old job in a bank, where programs could save global variables in "containers" (predates docker IIRC) and then other programs could access this.




This is what Intel Optane should have given us.

Non-volatile memory right in the CPU memory map. No "drives", no "controllers", no file allocation tables or lookup lists or inodes. Save to memory 16GB and it's there even through reboots.



It’s not Smalltalk but you might find OS/400 interesting for having a single level store for object persistence.

Old HN discussion with Wikipedia pointers: https://news.ycombinator.com/item?id=18907798



Isn’t that sort of the original idea for how Forth would work? Everything is just one big memory space and you do whatever you need?

I’m going from very hazy memory here.



I think it is, although you have to manually save the current image if you want to keep the changes you made. Which I find entirely reasonable.

I also think that what gp is looking for is Scratch. IIRC it's a complete graphical Smalltalk environment where everything is saved in one big image file. You change some function, it stays changed.



Arguably you could use GemStone/S like that, though it's probably not the kind of capabilities you want.


LMDB has a mode designed to do something similar, if anyone wants something like this with just a bit more structure to it like transactional updates via CoW and garbage collection of old versions. It's single writer via a lock but readers are lock/coordination free. A long running read transaction can delay garbage collection however.


Wow. How do design decisions get made that result in these types of situations in the first place?


Honestly it’s not too far off from what many databases do if they can. They manage one giant file as if it’s their own personal drive of memory and ignore the concept of a filesystem completely.

Obviously that breaks down when you need to span multiple disks, but conceptually it really is quite simple. A lot of the other stuff file systems do is to help keep things consistent. But if there's only one "file" and you don't ever need metadata, then you don't really need that.

Very smart solution really.



Yeah, a lot of database storage engines use O_DIRECT because the OS's general purpose cache heuristics are inferior vs them doing their own buffer pool management. That said if you try this naively you're likely to end up doing something a lot worse than the Linux kernel.


Someone says "hey if we had 900GB of RAM we could make a lot of money" and then someone else says "that's ridiculous and impossib- hang on a minute" and scurries off to hack together some tech heresy over their lunch break.


By the way, you can find single servers with 32 TB of RAM nowadays.


If I had to guess:

Doing it this way = $

Doing it that way = $$$



It's a very reasonable thing to do if you need performance.


I love this one.

If anyone from AWS reads your comment, they could have an idea for a new "product" xD



Very interesting. Can you give a sense of the speed up factor?


Sounds fragile: C++ compilers are permitted to do struct padding essentially as they please. A change of compiler could break the SSDstruct mapping (i.e. the member offsets).

C++ arrays, on the other hand, are guaranteed not to have padding. That's essentially what memory-mapped IO gives you out of the box.

https://stackoverflow.com/a/5398498

http://www.catb.org/esr/structure-packing/#_structure_alignm...
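
One way to make that fragility fail loudly rather than silently (a sketch, not the poster's actual code; Record and its fields are invented for illustration): pin the layout with fixed-width types and static_asserts, so a compiler or ABI change that alters padding breaks the build instead of corrupting the on-disk mapping.

    #include <cstdint>
    #include <cstddef>

    // Hypothetical on-SSD record; fields ordered so no implicit padding is needed.
    struct Record {
        uint64_t id;        // expected offset 0
        uint32_t balance;   // expected offset 8
        uint32_t flags;     // expected offset 12
    };

    // If another compiler or ABI pads this differently, compilation fails
    // instead of the persisted bytes being reinterpreted incorrectly.
    static_assert(offsetof(Record, id) == 0,      "layout changed");
    static_assert(offsetof(Record, balance) == 8, "layout changed");
    static_assert(offsetof(Record, flags) == 12,  "layout changed");
    static_assert(sizeof(Record) == 16,           "size changed");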



I'm not familiar with C++ rules (the linked answer seems very suspect to me, their argument being "if you change the struct, the struct changes!"), but they could absolutely just be declaring them as extern "C" to use C's layout rules.


Linux had methods to avoid fsync on filesystems, and if you used an SSD and a UPS you would usually have no problems. Pixar used that to write GBs of renders and media, for instance.


I do similar things with mmap and dumping raw structs to get insane speeds one wouldn't expect to get from traditional databases.

Perhaps you could even pause the operations, snapshot with dd and resume everything back in order to get a backup.
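
For anyone who hasn't seen the trick, a bare-bones sketch of the mmap-a-raw-struct approach (the file name and State struct are made up; only plain-old-data fields can be persisted this way):

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstring>

    struct State {              // whatever you want persisted, plain-old-data only
        uint64_t counter;
        char     note[56];
    };

    int main() {
        int fd = open("state.bin", O_RDWR | O_CREAT, 0644);
        ftruncate(fd, sizeof(State));                    // size the backing file

        // The file's bytes ARE the struct: writes land in the page cache and the
        // kernel flushes them, so there is no serialization step at all.
        auto *s = static_cast<State *>(
            mmap(nullptr, sizeof(State), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

        s->counter += 1;
        std::strncpy(s->note, "survives restarts", sizeof(s->note) - 1);

        msync(s, sizeof(State), MS_SYNC);                // force it to disk now
        munmap(s, sizeof(State));
        close(fd);
    }

Pausing writers and copying the file with dd, as suggested, would then give a consistent backup as long as nothing is mid-update.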



I used to work at a small company. We had a few remote embedded devices that did work and sent data back to the mothership over the internet. Their firmware could be remotely updated, but we were always very careful.

Well one day a mistake was finally made. Some of the devices went into a sort of loop. They’d start running the important process, something would go wrong, and they’d just retry every few minutes.

We caught the issue almost instantly since we were watching the deploy, and were able to stop updates before any other devices picked it up. But those that already got it were down.

We could ask the devices to send us the output of a command remotely, but it was too limited to be able to send back an error log. We didn’t have time to send back like 255 characters at a time or whatever, we needed to get it fixed ASAP.

And that’s when the genius stupid hack was suggested. While we couldn’t send up a full log file, we could certainly send up something the length of a URL. So what if we sent down a little command to stick the log on PasteBin and send up the URL to us?

Worked like a charm. We could identify what was going wrong and fix it in just a few minutes. After some quick (but VERY thorough) testing everything was fixed.



How could you have enough control over the machine to reroute the error log to (what I assume was) a Pastebin API, while also lacking access to any of the files on the machine? In my mind you'd be required to SSH into the machine to upload, and if you're SSH'd in, why not just cat the log?


Good question! We couldn't SSH in, which is too bad, because then this would all have been trivial. We had no direct access to the boxes; they were often behind firewalls. In fact that was the suggested placement, for security reasons. They weren't full servers, just little embedded things.

We had a little HTTP API that it was always talking to. It would call the API to send data back to us or just check in regularly, and we would return to it a little bit of status stuff like the current time to keep clocks in sync, and a list of which “commands” they need to run.

Mostly the commands were things like "your calibration data is out of date, pull an update" or "a firmware update is available".

But one of them let us run arbitrary shell commands. The system was very limited. I wasn't a developer directly on the project, but I think it was just our custom software plus BusyBox and a handful of other things our normal shell scripts used. I assume the command had been added after some previous incident.

I believe the basic idea was that during troubleshooting you could tell a box to return the output of “cat /etc/resolv.conf” or something else that we hadn’t preplanned for without having to send someone into the field. But since it was only for small things like that it couldn’t return a full file.

Luckily one of the commands was either curl or wget. So we could send down “curl -whatever /log/path https://pastebin/upload” or whatever it was. I don’t remember if we signed up for a pastebin account so we knew where it would show up or if we had it return URL to us in the output of the curl command.

This suggestion was literally a joke. We were all beating our heads against the wall trying to help and someone just said “why don’t we just stick it on pastebin“ out of frustration, and the developer on the project realized we had what we needed to do that and it would work.



I was doing some proxy soak testing for a company once where we had to run the tests from the server room but there was no non-proxy connectivity from that room to where we were monitoring the tests. Simple solution: output the progress to Dropbox, watch the same file upstairs. Bit of delay, sure, but better than having no idea how things are going until the 30-60min test is done (and no, we weren't allowed to sit in the server room watching it.)


> In my mind you'd be required to ssh into the machine to upload, and if you're ssh'd in, why not just cat the log?

SSH on remote IoT-class devices works. The problem is rarely SSH; the problem is always some form of key management plus NATs in between.

If you've got a few thousand devices in the field, public key management can become a real pain, especially when you want to revoke keys.



I’ve worked at a company where our remote access was over a super slow modem line but the machine did have access to the internet.


Your company had remote embedded devices but didn't keep one "locally" for debugging issues?


There's a zillion things that you can't necessarily test for locally. When you have a fleet of IoT devices deployed in other people's environments, there's literally no way to test everything.

Your question comes down to "Why didn't you just deploy bug free code using perfect processes? That's what I always do."

I mean, cmon.



I never said they should test for everything.

OP's description suggests it was an error that's common to all the deployed instances that received the update, rather than some specific combination of environment and that deployment.

It would have allowed them to run the same deployment locally and use physical access (serial, a display, sd card, whatever) to capture the error log.

What they came up with is clever but it's very surprising that they needed it, especially given that they have very limited remote access to the units that are in the wild.



After all this time I don’t remember what the bug was. We did have boxes locally that we tested on of course. But somehow this got out.

It might’ve been something that only showed up under certain configurations. It might’ve been something that just should have been caught under normal circumstances and wasn’t for some reason. It may have been something that worked fine tested off-hardware until some bug in packaging things slightly changed it and broke a script. Or it could’ve been a case of “this is trivial, I know what I’m doing, it will be fine“.

We were a very small operation so I’m not going to say that we had an amazing QC process. It may have been a very human mistake.



> I know what I’m doing, it will be fine

I don't think anyone is truly a programmer until they've learnt the hard way the outcome of "what could go wrong"!

Thanks for the update, happy holidays mate!



Sure! I’ve learned that lesson the hard way a couple of times myself.


No, they said some.


Emphasis is mine:

> We caught the issue almost instantly since we were watching the deploy, and were able to stop updates before any other devices picked it up. But those that already got it were down.



You're correct. I think every box that got it started having problems (or at least most); the only reason any were still up is that updates were scattered in case of this kind of incident (and to avoid hammering our poor little server).


As a 12 year old, I tried to overclock the first "good" computer of my own (an AMD Duron, 1200 MHz). The system wouldn't start at 1600 MHz and I didn't know a BIOS reset existed. I ended up putting the computer in the freezer and letting it cool down for an hour. I placed the CRT display on top, with the power, VGA, and keyboard cables running into the freezer. I managed to set it back to the original frequency before it died.


I kept a supply of coins in the freezer. I would regularly toss a few into the heatsink on my TRS-80 that was unstable after a RAM upgrade.


You guys are really smart. When I was a kid I had a graphics card that would overheat and crash the computer when I played Lineage. So I would get down under my desk and blow on it...


Once my phone died from a cracked solder joint. I had cold veggie sausages in the hotel room fridge. Holding my phone against the sausages let me grab a couple more files off of it. Saved my OTP keys that way (I've fixed my backups now :) )


When I was a teenager my friend would throw his laptop into the freezer for a few minutes every hour when we were playing games. He probably threw it in there hundreds of times, and it worked fine for years.


A friend had an overheating laptop that even an external cooler couldn’t keep up with so he got to sit right next to the open door in winter.

We called it the Frozen Throne.



I don't know why but this reminds me of how we picture-framed my friend's old Wifi chip after replacing it, because that chip failing all the time was basically the core feature of our group's gaming sessions.


Hahah this is amazing!


As a kid I acquired my parent's login to the school platform, meaning I could call myself in sick. However, one day I actually got sick, so they had to call it in, which meant they would've seen all the previous call-ins.

So I downloaded the HTML for all pages required for this exact flow and removed the previous sick days. I then changed my etc/hosts file, gave them my computer and prayed that they wouldn’t try to visit any other page than the ones I downloaded.

Worked like a charm. Later I called in sick myself.



That's good. I took AP Computer Science in high school thinking we would learn how to make apps with graphical user interfaces (I didn't know there was any other kind), so I was uninterested and never paid attention or studied once it became clear we were only learning Java, and not the kind for making UIs.

I had a friend whose mom was a programmer and she would help us get the answers to homework problems. I would change variable names and a few other things, but one time we still got caught with way too similar answers.

In order not to get my friend and his mom in trouble, I told the teacher that I had in fact cheated but not from my friend. I stayed up for a few nights learning HTML and other things (well, modifying other code I found) in order to make a blog where a similar question as the homework problem was discussed, so that I could cite it as my source. At first I tried using blogging software but the timestamps are automatically coded as the current date, which wouldn’t work. So I had to make my own blog-looking site from scratch complete with several months worth of content, which was pretty much plagiarized I think.

It was way more effort than actually doing the homework would have been.



Absolutely brilliant

I used to do the same with school report cards, which began being delivered electronically when I was in Middle School ;^)



Have you ever told them?


I implemented an enterprise data migration in JavaScript, running in end users' browsers. (So no server-side Node.js or such.)

It was a project scheduled for 2-3 months, for a large corporation. The customer wanted a button that a user would click in the old system, requesting a record to be copied over to the new system (Dynamics CRM). Since the systems would be used in parallel for a time, it could be done repeatedly, with later clicks of the button sending updates to the new system.

I designed it to run on an integration server in a dedicated WS, nothing extraordinary. But 3 days before the scheduled end of the project, it became clear that the customer simply would not have a server to run the WS on. They were incapable of provisioning it and configuring the network.

So I came up with a silly solution: hey, the user will already be logged in to both systems, so let's do it in their browser. The user clicked the button in the old system, which invoked a javascript that prepared the data to migrate into a payload (data -> JSON -> Base64 -> URL escape) and GET-ed it in a URL parameter onto a 'New Record' creation form into the new system. That entire record type was just my shim; when its form loaded, it woke another javascript up, which triggered a Save, which triggered a server-side plugin that decoded and parsed the data, which then processed them, triggering like 30 other plugins that were already there - some of them sending data on into a different system.

I coded this over the weekend and handed it in, with the caveat that since it has to be a GET request, it simply will not work if the data payload exceeds the maximum URL length allowed by the server, ha ha. You will not be surprised to learn the payload contained large HTMLs from rich text editors, so it did happen a few times. But it ran successfully for over a year until the old system eventually was fully deprecated.

(Shout out to my boss, who was grateful for the solution and automatically offered to pay for the overtime.)



That’s horrible. I love it!

I'm not quite sure I understand why it was GET though. No way of running something like fetch or (more likely) XMLHttpRequest?



I think the OP (hats off!) needed a way to transfer data to the front end of another application. Since there's no back end involved, the only available channel is the request URL.


> Since there's no back end involved, the only available channel is the request URL

Not quite. I have a system that uses a custom userscript to create an extra button on certain webpages that, when clicked, scrapes some data from the current page and copies a lightly encoded version to the user's clipboard. They then switch to another webpage and paste it in a box.

I've also gotten data from place to place using scraping from temporary iframes (same site).



That guess was actually quite close. The target system does support that out of the box as a way to pre-fill data into a form, but only over GET.


Oh that would make sense. Thanks for the guess.


I had an old boiler that would sometimes trip and lock out the heat until someone went down and power cycled it. (It was its own monstrous hack of a gas burner fitted to a 1950s oil boiler and I think a flame proving sensor was bad.)

Every time it happened, it made for a long heat up cycle to warm the water and rads and eventually the house.

So I built an Arduino-controlled NC relay that removed power for 1 minute out of every 120. That was often enough to eliminate the effect of the fault, but not so often that I worried about too much unburned gas accumulating if the boiler ever failed to ignite. 12 failed ignitions per day wouldn't give a buildup to be worried about.

That ~20 lines of code kept it working for several years until the boiler was replaced.
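
Something in the spirit of those ~20 lines, as a guess at the approach rather than the original code (the relay pin and wiring polarity are assumptions, and the real thing presumably had more safety logic):

    // Assumes a normally-closed relay on pin 7: energizing the coil (HIGH)
    // OPENS the contact and cuts power to the boiler.
    const int RELAY_PIN = 7;
    const unsigned long OFF_MS   = 1UL * 60UL * 1000UL;    // 1 minute off...
    const unsigned long CYCLE_MS = 120UL * 60UL * 1000UL;  // ...out of every 120

    void setup() {
        pinMode(RELAY_PIN, OUTPUT);
        digitalWrite(RELAY_PIN, LOW);   // de-energized: NC contact closed, boiler powered
    }

    void loop() {
        // millis() wraps after ~49 days, but unsigned subtraction keeps this correct.
        static unsigned long cycleStart = millis();
        unsigned long elapsed = millis() - cycleStart;

        if (elapsed >= CYCLE_MS) {
            cycleStart += CYCLE_MS;            // start the next 120-minute cycle
        } else if (elapsed < OFF_MS) {
            digitalWrite(RELAY_PIN, HIGH);     // cut power for the first minute
        } else {
            digitalWrite(RELAY_PIN, LOW);      // powered for the remaining 119
        }
    }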



I have a similar one.

Our boiler has a pump to cycle hot water around the house - this makes it so you get warm water right away when you turn on a faucet and also prevents pipes in exterior walls from freezing in the winter.

This stopped working, the pump is fine but the boiler was no longer triggering it.

I just wired up mains through an ESP32 relay board to the pump and configured a regular timer via ESPHome.

Temperature-based logic would be even better, but I haven't found a good way to measure pipe temperature yet.



I eventually switched to an ESP32 and added temperature graphing: https://imgur.com/a/VM7nD74

IIRC, I used an RTD that I had left over from a 3D printer upgrade, but an 18B20 would work fine as well. A 10K NTC thermistor might even be good enough. For what I needed (and I think for what you need), just fixing the sensor to the outside of the pipe [if metal] will give you a usable signal. That sensor was simply attached with metal HVAC tape to the cast-iron front door of the burner chamber.

But a dead-simple timer solution gets you pretty far as you know.
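
And if the temperature-based logic ever becomes tempting, a 10K NTC really is about the simplest route. A rough sketch for an ESP32 running the Arduino core (the ADC pin, divider resistor value, and Beta constant are assumptions to replace with the part's datasheet values):

    // 3.3V --- 10K fixed resistor ---+--- NTC thermistor --- GND
    //                                |
    //                             GPIO34 (ADC input)
    const int   NTC_PIN   = 34;
    const float R_FIXED   = 10000.0;   // divider resistor, ohms
    const float R_NOMINAL = 10000.0;   // NTC resistance at 25 C
    const float T_NOMINAL = 298.15;    // 25 C in kelvin
    const float BETA      = 3950.0;    // typical Beta for cheap 10K NTCs

    void setup() {
        Serial.begin(115200);
    }

    void loop() {
        // The ESP32 ADC is 12-bit by default (0..4095). Its nonlinearity is ugly,
        // but for "is the pipe warm or cold" it's plenty.
        int raw = analogRead(NTC_PIN);
        float v = raw / 4095.0;                   // fraction of full scale at the node
        float rNtc = R_FIXED * v / (1.0 - v);     // resistance of the NTC leg

        // Beta equation: 1/T = 1/T0 + (1/B) * ln(R / R0)
        float tKelvin = 1.0 / (1.0 / T_NOMINAL + log(rNtc / R_NOMINAL) / BETA);
        Serial.println(tKelvin - 273.15);         // degrees C

        delay(1000);
    }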



The pipes are insulated and I didn't want to cut into that, but maybe a small hole for a sensor wouldn't be too bad.

But as you say, the timer works well enough, and that means little motivation to keep working on it -- countless other projects await :)

BTW I've also tuned the timer to run for longer in the morning to get a hot shower ready.

Edit: nice dashboard, what are you using for the chart? I like the vintage look.



That is another somewhat hacky thing.

I have a mix of shame and pride that the chart (everything in the rectangle) is entirely hand-coded SVG elements emitted by the ESP web request handler.



I'm thiiiiiiis close to installing a circulating pump. I plan to power it off the bathroom lightswitch, which I might just replace with a motion sensor.


Couldn't that be achieved with a mechanical timer switch and zero lines of code?


Probably that doesn't give you small enough time increments

