Oh christ, you're right, they were actually being really funny. I was being super literal and imagined them being very excited about futuristic advances in giant isopod diagnosis and care.

In my thesis many years ago [0] I used EAs to build bicycle wheels. They were so annoyingly good at exploiting whatever idiosyncrasies existed in my wheel simulator. In the first iterations of my simulator, it managed to evolve wheels that would slowly oscillate, due to floating-point instability or something; when forces were applied, they would increase and increase until the whole simulator exploded and the recorded forces were all over the place, which of course out-competed any other wheel in at least some objective dimension.

After fixing those bugs, I mostly struggled with it taunting me, like building a wheel with all the spokes going from the hub straight up to the rim. It would of course break down when rolling, but on the objective of "how much load can it handle on the bike" it again out-competed every other wheel, and thus sat on the Pareto front of that objective and kept showing up through all my tests. Hated that guy, heh. I later changed it to test all wheels in at least 4 orientations; it would then still taunt me with wheels like (c) in this figure [1], exploiting that.

[0]: https://news.ycombinator.com/item?id=10410813

[1]: https://imgur.com/a/LsONTGc

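(Something like this is the usual shape of that fix; a minimal Python sketch, where the wheel representation and the toy `load_capacity` stand-in are hypothetical, not from the thesis:)

```python
import math

N_ORIENTATIONS = 4  # evaluate every candidate wheel in several rotations

def load_capacity(wheel, orientation_deg):
    """Hypothetical stand-in for the real simulator: how much load the
    wheel carries when force is applied at the given rim angle."""
    rad = math.radians(orientation_deg)
    return sum(abs(math.cos(math.radians(a) - rad)) for a in wheel["spoke_angles"])

def fitness(wheel):
    # Score a wheel by its WORST orientation, so a degenerate design that
    # only carries load along one axis can no longer win the load objective.
    angles = [i * 360 / N_ORIENTATIONS for i in range(N_ORIENTATIONS)]
    return min(load_capacity(wheel, a) for a in angles)

print(fitness({"spoke_angles": [90] * 36}))                    # all spokes straight up: ~0
print(fitness({"spoke_angles": [i * 10 for i in range(36)]}))  # evenly laced: ~22.9
```
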
How would more AI help? "Given this goal with these parameters, figure out if another AI will ever game it into eventual thermonuclear war."

Feels halting-problem-esque.

What claim would that be? It's a hilarious, factual example of unintended consequences in model training. Of course they fixed the freaking bug in about two seconds.

Some people are able to do frame-perfect inputs semi-consistently, from what I understand. I don't understand how, as my own performance is hitting a 100 ms window maybe once every other attempt.

Oh, sorry! I recognized the description, but since I recalled mine being a Google Sheets link, I just went straight into search mode, and yep, it actually took me a bit to find.

IIRC the flip side was that it was hideously specific to a particular model and batch of hardware, because it relied on something that would otherwise be considered a manufacturing flaw.

Relying on nuances of the abstraction and on undefined or variable characteristics sounds like a very, very bad idea to me.

The one thing you generally want from circuits is reproducibility.

Looking at the actual article (https://www.nature.com/articles/s41467-024-54178-1), their procedure does in fact use deep learning in the process of synthesizing candidate chip designs, and that use of deep learning is key to their work. In particular, the final process is a genetic algorithm that interacts with a deep learning model predicting the performance of candidate chips. Simulating the chip analytically to predict performance was apparently far too inefficient, and replacing that part with a deep learning model made this entire procedure possible.

So, in summary, nothing in this article is called AI that has not been called AI before. Most importantly, it produces novel designs "globally," without a human in the loop, where one was required before. I think calling that AI-designed is pretty reasonable.

On a very high level, the role of deep learning here seems similar to AlphaGo (which is also the combination of a less novel generic optimization algorithm, Monte Carlo tree search, with deep-learning-provided predictions). I don't think anyone would debate that AlphaGo is fundamentally an AI system. Maybe, if we are to be really precise, both of these systems are optimization guided by heuristics provided by deep learning.

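(A small sketch of that loop, for the curious. All names here are hypothetical, the `model.predict` call assumes an sklearn-style surrogate, and the GA details, truncation selection, one-point crossover, point mutation, are generic rather than taken from the paper:)

```python
import random

def surrogate_performance(design, model):
    """Deep-learning surrogate: predicts chip performance from a design
    vector, replacing the far slower analytical simulation."""
    return model.predict([design])[0]

def evolve(model, design_len=64, pop_size=100, generations=200):
    # A plain genetic algorithm; the only deep learning is inside the
    # fitness call, much like the learned evaluations guiding MCTS in AlphaGo.
    pop = [[random.random() for _ in range(design_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda d: surrogate_performance(d, model), reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(design_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(design_len)] = random.random()  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda d: surrogate_performance(d, model))
```
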
Optimization is near and dear to my heart (see username), but I think it's fine to call optimization processes AI, because they are AI in the classical sense.

Once a computer can do something, it's no longer called AI, just an algorithm.

At least, that used to be the case before the current AI summer and hype.

If you have an agent in an environment, the program that controls its choices has pretty consistently been called AI, even when it's a simple design.

But I'm skeptical of calling most optimizers AI.

My point is that it's equally ridiculous to call either one AI. If the chip here is not the AI, then the AI has to be the optimizer. By extension, that means AdamW is more of an AI than ChatGPT.

Prior to 2021/2022, whenever it was, most sensible people called this stuff deep learning, machine learning, etc. For 15+ years it's been called machine learning: "getting machines to complete tasks without being explicitly programmed to do so."

Since LLM applications got popular, everyone has been saying AI. This happened before, during the previous mini hype cycle around 2016-ish, when everyone was claiming neural networks were "AI," even though, historically, academics still referred to them as machine learning. No one serious who actually works on these things, and isn't interested in making hordes of $$$ or getting popular on social media, calls this stuff AI.

So if there were one Wikipedia link to include on this thread, I'd say it would be this one: https://en.m.wikipedia.org/wiki/Advertising. Because, let's face it, the advertising/marketing teams selling products that use linear regression as "AI" are the ones shifting the definition into utter meaninglessness. So it's no surprise that people on HN, some of whom actually know stuff about things, get frustrated, annoyed, and tetchy about calling things "AI" (when it isn't) after three sodding years of this hype cycle. I was sick of it after a month. Imagine how I feel!

Machine learning is a subfield of AI. Complaining about calling ML "AI" is like complaining about calling Serena Williams an "athlete" because she's actually a "tennis player."

Yes, for low-frequency analog circuits these experiments go back to at least the 1990s:

J. R. Koza, F. H. Bennett, D. Andre, M. A. Keane, and F. Dunlap, "Automated synthesis of analog electrical circuits by means of genetic programming," IEEE Trans. Evol. Comput., vol. 1, pp. 109–128, July 1997. https://dl.acm.org/doi/10.1109/4235.687879

As I recall, it didn't even work from day to day, due to variance in the power supply triggered by variance in the power grid.

They had to redo the experiment on simulated chips.

And then you change temperature or elevation, or move it next to a router, and it falls apart, because after all there is always going to be something correlated that it exploited.

I thought tiny wireless antennas were already dark magic that people barely understood; it was always more trial and error anyway. Feels like yet another so-called science publication running a clickbait headline.

This is a strange distinction for the article to point out. If you want to take a more modular approach, all you have to do is modify the loss function to account for that. It's entirely arbitrary.

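(As a rough illustration, with hypothetical names, and where the penalty itself is just one of many possible choices, a modularity term bolted onto the loss:)

```python
import numpy as np

LAMBDA = 0.1  # assumed weight trading raw performance against modularity

def modularity_penalty(adjacency, module_of):
    """Count connections that cross module boundaries: 'adjacency' is an
    n-by-n 0/1 matrix, 'module_of' maps component index -> module id."""
    n = adjacency.shape[0]
    return sum(
        adjacency[i, j]
        for i in range(n)
        for j in range(i + 1, n)
        if module_of[i] != module_of[j]
    )

def loss(performance_error, adjacency, module_of):
    # Original objective plus a term steering the optimizer toward layouts
    # that respect module boundaries.
    return performance_error + LAMBDA * modularity_penalty(adjacency, module_of)

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
print(loss(0.5, A, module_of=[0, 0, 1]))  # one cross-module edge: 0.5 + 0.1
```
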
I wonder about the security of chips designed this way. It has been demonstrated that apparently optimal architectures can lead to huge errors that create security flaws (Spectre, PACMAN on the M1, etc.).

Also see the work done on topology optimization: mechanical designs no human would come up with, but no AI required either, just numerical optimization.

> that pitfalls remain "that still require human designers to correct." In particular, many of the designs produced by the algorithm did not work

So? Nothing.

Um, yeah, this is another "works on my desk" type of development.

Has anyone considered debugging? Of course the bot will do that too, right?

People don't necessarily choose their own pronouns based on how it will reflect on an oppressed group, and they don't necessarily intend to represent a group when representing themselves.

If you’re up for sharing, I’m curious to know approximately how many hours each week you spend working on HN. It seems like it would be an enormous amount of time, but I’m just guessing.