OpenAI was betting on AI, AGI, superintelligence.

Look at the Google engineer who thought they had an AI locked up in the basement: https://www.theverge.com/2022/6/13/23165535/google-suspends-...

MS paper on sparks of AGI: https://www.microsoft.com/en-us/research/publication/sparks-...

The rumors that OpenAI's deal with MS would give them everything until they got to AGI: a perpetual license to all new development.

All the "safety people" have left the OpenAI building. Even Musk isn't talking about safety any more.

I think the bet was that if you fed an LLM enough and got it big enough, it would hit a tipping point and become AGI, or sentient, or sapient. That lines up nicely with the MS terms and with MS's paper. I think they figured out that the math doesn't work that way (and never was going to). A better prediction of the next token isn't intelligence any more than weather prediction will become weather.
Better results, yes, but that doesn't mean good results. It can only find local optima in a predetermined state space. Training a neural network involves (1) finding the right state space, and (2) choosing a suitable gradient function. If the Correct Solution isn't in the state space, or isn't reachable via gradual improvement, the neural network will never find it.

An algorithm that can reason about the meaning of text probably isn't in the state space of GPT. Thanks to the https://en.wikipedia.org/wiki/Universal_approximation_theore..., we can get something that looks pretty close when interpolating, but that doesn't mean it can extrapolate sensibly. (See https://xkcd.com/2048/, bottom right.) As they say, neural networks "want" to work, but that doesn't mean they can.

That's the hard part of machine learning. Your average algorithm will fail obviously if you've implemented it wrong. A neural network will just not perform as well as you expect it to (a problem that usually goes away if you stir it enough: https://xkcd.com/1838/), without a nice failure that points you at the problem. For example, Evan Miller reckons that there's an off-by-one error in everyone's transformers: https://www.evanmiller.org/attention-is-off-by-one.html

If you add enough redundant dimensions, the local optimum you land in tends to be the global optimum of a real-world gradient function (most of the time), so it's often useful to train a larger model than you theoretically need, then produce a smaller model from that.
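To make that off-by-one claim concrete: Miller's proposal is to add 1 to the softmax denominator so an attention head can assign near-zero weight to everything. A minimal NumPy sketch of the two variants (the score values here are made up for illustration):

```python
import numpy as np

def softmax(x):
    # Standard softmax: the weights always sum to exactly 1,
    # so an attention head is forced to attend to *something*.
    e = np.exp(x)
    return e / e.sum()

def softmax1(x):
    # Miller's "off-by-one" variant: the extra 1 in the denominator
    # lets the weights sum to less than 1, so a head can effectively abstain.
    e = np.exp(x)
    return e / (1.0 + e.sum())

scores = np.array([-8.0, -9.0, -10.0])  # made-up attention scores
print(softmax(scores).sum())   # 1.0
print(softmax1(scores).sum())  # ~0.0005: the head can say "nothing here"
```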
> But you should find their self-direction capacity incredible

No, why would I? Depending on what you mean by self-direction, that's either an evolved trait (with evolution, rather than the mouse itself, as the intelligence) for the bigger-picture what-even-is-good, or it's fairly easy to replicate even for a much simpler AI. The hard part has been getting them to distinguish between different images, not this kind of thing.

> and their ability to instinctively behave in ways that help them survive and propagate themselves. There isn't a machine or algorithm on earth that can do the same,

https://en.wikipedia.org/wiki/Evolutionary_algorithm

> much less with the same minuscule energy resources that a mouse's brain and nervous system use to achieve all of that.

That is nice, but again, this is mixing up the intelligence of the animal with the intelligence of the evolutionary process which created that instance. I as a human have no knowledge of the evolutionary process which lets me enjoy the flavour of coriander, and my understanding of the Krebs cycle is "something about vitamin C?" rather than anything functional; while my body knows these things, it is untenable to claim that my body knowing them means that I know them.
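For what it's worth, the linked evolutionary algorithms are exactly that: candidates "survive and propagate" purely by selection pressure, with no understanding involved. A toy sketch (the target string, population size, and mutation rate are arbitrary choices for illustration):

```python
import random

TARGET = "survive and propagate"        # arbitrary fitness target
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Count positions matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Random initial population.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]

for generation in range(2000):
    # Selection: the fittest half "survives".
    population.sort(key=fitness, reverse=True)
    survivors = population[:100]
    if fitness(survivors[0]) == len(TARGET):
        print(f"target reached in generation {generation}")
        break
    # Reproduction: survivors "propagate" with mutation.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(100)]

print(survivors[0])
```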
Still passes the "a machine that would survive a single day" test, and given that machines run off electricity and we already have PV, food isn't a big deal here.
> Intelligence implies a critical evaluation of the statement under examination, before stating it, on considerations over content.

And who says LLMs are not able to do that (eventually)?
Is there a good reference available that describes what happened with expert systems in the '80s? I'm only vaguely aware of such things, but in my mind they seem to have some utility.
> the term AI (...) has completely taken over

Maybe if you go by article titles it has. If you look at job titles, there are many more ML engineers than AI engineers.
That is funny, because of all the problems with LLMs, the biggest one is that they will lie/hallucinate/confabulate to your face before saying "I don't know", much like those leaders.
Building something that replicates the abilities of the average human being in no way implies that this eventually leads to a superintelligent entity. And my broader point was that many people are using the term AGI as synonymous with that superintelligent entity. The concepts are very poorly defined and thrown around without much deeper thought.

> Are you aware that we have been using that term for at least 60 years?

Yes, and for the first ±55 of those years, it was largely limited to science fiction stories and niche areas of computer science. In the last ±5 years, it's been added to everything. I can order groceries with AI, optimize my emails with AI, on and on. The term has become dramatically more widespread recently. https://trends.google.com/trends/explore?date=today%205-y&q=...

> And that the Brownian minds of the masses very typically try to interfere while we proceed focusedly and regarding it as noise? Today they decide that the name is Anna, tomorrow Susie: childplay should remain undeterminant.

You're going to have to rephrase this, because it's unclear what point you're trying to make other than "the masses are stupid." I'm not sure "the masses" are even relevant here, as I'm talking about individuals leading and working at AI companies.
I work at Microsoft, though not in AI. This describes Copilot to a T. The demos are spectacular and get you so excited to go use it, but the reality is so underwhelming.
Contrary to what the corporations want you to believe -- no, you can't buy your way out of every problem. Most of the modern AI tools are oversold and underwhelming, sadly.
With the most recent update, it's actually very simple. You need three things:

1) Add the OpenAI Conversation integration - https://www.home-assistant.io/integrations/openai_conversati... - and configure it with your OpenAI API key. In there, you can control part of the system prompt (HA will add some stuff around it) and configure the model to use. With the newest HA, there's now an option to enable "Assist" mode (under the "Control Home Assistant" header). Enable this.

2) Go to "Settings/Voice assistants". Under "Assist", you can add a new assistant. You'll be asked to pick a name and a language, then choose a conversation model - here you pick the one you configured in step 1) - and Speech-to-Text and Text-to-Speech models. I have a subscription to Home Assistant Cloud, so I can choose "Home Assistant Cloud" models for STT and TTS; it would be great to integrate third-party ones here, but I'm not sure if and how.

3) Still in "Settings/Voice assistants", look for a line saying "${some number} entities exposed", under the "Add assistant" button. Click that, and curate the list of devices and sensors you want "exposed" to the assistant - "exposed" here means that HA will make a large YAML dump out of the selected entities and paste that into the conversation for you[0]. There's also other stuff (I heard the docs mention "intents") that you can expose, but I haven't looked into it yet[1].

That's it. You can press the Assist button and start typing. Or, for a much better experience, install HA's mobile app (and, if you have a smartwatch, the watch companion app), and configure Home Assistant as your voice assistant on the device(s). That's how you get the full experience of randomly talking to your watch, "oh hey, make the home feel more like a Borg cube", and witnessing lights turning green and climate control pumping heat.

I really recommend everyone who can to try that. It's a night-and-day difference compared to Siri, Alexa or Google Now. It finally fulfills those promises of voice-activated interfaces. (I'm seriously considering making a Home Assistant to Tasker bridge via HA app notifications, just to enable the assistant to do things on my phone - the experience is just that good; I bet it'll work better than the Google stuff out of the box.)

--

[0] - That's the inefficient token waster I mentioned in the previous comment. I have some 60 entities exposed, and as best I can tell, it generates a couple thousand tokens' worth of YAML, most of which is noise like entity IDs and YAML structure. This could be cut down significantly if you named your devices and entities cleverly (and concisely), but I think my best bet is to dig into the code and trim it down. And/or create synthetic entities that stand in for multiple entities representing a single device or device group, e.g. one "A/C" entity that combines multiple sensor entities from all A/C units.

[1] - Outside the YAML dump that goes with each message (and a preamble with the current date/time), which is how the assistant knows the current state of every exposed entity, there's also an extra schema exposing controls via the "function calling" mechanism of the OpenAI API, which is how the assistant is able to control devices at home. I assume those "intents" go there. I'll be looking into it today, because there's a bunch of interactions I could simplify if I could expose automation scripts to the assistant.
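If you're curious what that function-calling mechanism looks like on the OpenAI side, here's a rough hand-rolled sketch of the idea; this is not Home Assistant's actual code, and the `set_light` tool and entity IDs are made up for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A made-up tool schema; HA generates something analogous for exposed entities/intents.
tools = [{
    "type": "function",
    "function": {
        "name": "set_light",
        "description": "Turn a light on or off, optionally with a color",
        "parameters": {
            "type": "object",
            "properties": {
                "entity_id": {"type": "string", "description": "e.g. light.living_room"},
                "state": {"type": "string", "enum": ["on", "off"]},
                "color": {"type": "string"},
            },
            "required": ["entity_id", "state"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # whichever model you configured in step 1)
    messages=[
        {"role": "system", "content": "You control a smart home. Current entity states: ..."},
        {"role": "user", "content": "make the home feel more like a Borg cube"},
    ],
    tools=tools,
)

# The model answers with structured tool calls, e.g. set_light(entity_id=..., state="on",
# color="green"), which the integration then executes against the real devices.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```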
Your article states this differently. The development has not been fully canceled, but refocused:

“and now hopes to release a more standard headset with fewer abilities by the end of next year.”
That's what all these companies are peddling, though. The question is: do humans actually NEED a display before their eyes for all their waking hours? Or even most of them? Maybe, but today I have some doubts.
Given how we as a society are now having significant second thoughts about the net utility of everybody having a display in their pocket for all their waking hours, I also have some doubts.
Stick a slick UI on it that lets the end user quickly fix up just the bits it got wrong and flip through documents, and 80% correct can still be a massive time saver.
We're thinking about adding AI to the product, and that's the path I'd like to take: view the AI as an intern who can make mistakes, and provide a UI where the user can review what the AI is planning to do.
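A minimal sketch of that review-before-apply pattern (the `ProposedChange` type and field names are made up, not from any particular product):

```python
from dataclasses import dataclass

@dataclass
class ProposedChange:
    field: str
    old_value: str
    new_value: str

def review_and_apply(proposals, record):
    """Show each AI-proposed change to the user; apply only what they approve."""
    for p in proposals:
        answer = input(f"{p.field}: {p.old_value!r} -> {p.new_value!r}  apply? [y/N] ")
        if answer.strip().lower() == "y":
            record[p.field] = p.new_value
    return record

# Usage: the model drafts changes, the human stays in the loop.
record = {"invoice_total": "1,200.00"}
proposals = [ProposedChange("invoice_total", "1,200.00", "1,250.00")]
record = review_and_apply(proposals, record)
print(record)
```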
I'm going to copy my answer from zellyn in a thread some time ago:

https://news.ycombinator.com/item?id=40516532
Same here, but I'm hoping it takes off for other people.

I get requests all the time from colleagues to have discussions via telephone instead of chat because they are bad at typing.
This is a common problem across all fields. A classic example is that you don't change SAP to suit your particular business, but instead you change your business to suit SAP.
I was trying to be nice, but yeah, "I'm smarter" and "I crush your skull" are not witty. There are some nice turns of phrase in there, though. The kid has potential!
I think one of the results of this is that the concept of AI itself becomes increasingly muddled, until it is indistinguishable from a word like “technology” and therefore useless for describing a particular phenomenon. You can already see this with the usage of “AGI” and “superintelligence”, which, from the definitions I’ve been reading, are not the same thing at all. AGI is/was supposed to be about matching the abilities of the average human being, not about a sci-fi AI god, and yet it seems like everyone is using the terms interchangeably. It’s very sloppy thinking.

Instead, I think the term AI will slowly become less trendy as a marketing term and fade out over time, as all trendy marketing terms do. What will be left are actually useful enhancements to specific use cases - most of which will probably be referred to by a word other than AI.