(comments)

Original link: https://news.ycombinator.com/item?id=39301495

Privacy policies typically contain complex legal jargon that consumers rarely read or understand. Yet according to Section 10 of the Gemini fine print, "We may collect certain non-personally identifiable information from you, including your IP address, unique device identifier ("UDID"), hardware and software details, media access information, geolocation data, and communications metadata. Such data may include information related to the apps or websites we reference when responding to you, as well as information viewed during your use of Gemini ("Usage Data"). We automatically delete Usage Data after 72 hours (to maintain "safety and reliability") or upon your request, whichever occurs first." Given regulatory oversight and the applicable legal frameworks, it remains uncertain whether Google can legally claim ownership of user data after that period. Users must nevertheless weigh all the possible consequences of sharing personal information through a virtual assistant platform before using it. Since data retention periods vary by jurisdiction and local legislation, privacy concerns remain paramount. Like most digital platforms, the company reserves the right to analyze data for performance-optimization purposes. In practice, however, data analysis often means processing and storing consumer data and selling it to advertisers and service providers for targeted marketing campaigns and ad revenue generation.


Original text
Hacker News
Bard is now Gemini, and we’re rolling out a mobile app and Gemini Advanced (blog.google)
545 points by chamoda 21 hours ago | hide | past | favorite | 985 comments










I'm surprised they got rid of the Bard name. It struck me as a really smart choice since a Bard is someone who said things, and it's an old/archaic enough word to not already be in a zillion other names.

Gemini, on the other hand, doesn't strike me as particularly relevant (except that perhaps it's a twin of ChatGPT?), and there are other companies with the same name. EDIT: I can see the advantage of picking a name that, like "Google" also starts with a "G".

Just as one data point, bard.com redirects to some other company (bd.com), whereas Gemini.com is a company by that name.

I'd be curious on the scuttlebutt on how this decision was reached!



I'm not surprised -- I thought Bard was terrible branding. It's all associations with Shakespeare and poetry and medieval England, and as much as I might personally enjoy those, it's extremely backwards-looking, with archaic connotations. Also it sounds close to "beard" -- hairy stuff.

Gemini sounds like the space program -- futuristic, a leap for mankind. It's got all the right emotional associations. It's a constellation, it's out in space, it's made of stars. Plus it contains "gem" which feels fancy, valuable, refined.

I'm not saying Gemini is the best name I've ever heard or even close to it, but it feels 100% appropriate, in a way that Bard does not.



Interesting. I don’t like the name at all because it makes me think of people who take horoscopes seriously. Your impression seems to be untainted by that, which is nice.


Same here, I think I'm more on your side which I guess goes to show how all over the map subjective reactions can be.

But first of all, I thought the whole idea of Alphabet was a kind of cheeky way of telling the world you had a portfolio of projects, one for each letter. "B is for Bard" would be perfect, and Gemini is about as incompatible as it gets, given that G is already claimed.

I also find it bizarre to say that association with Shakespeare, or the association with whimsical poetic expression is in any sense a bad thing. It's a clean, simple, fun name that's remarkably short and surprisingly unclaimed. And I don't even strongly associate it specifically with Shakespeare, that's like a background association as far as I'm concerned.

I think perhaps the real talk here is that Bard was kind of an emergency response to ChatGPT, but also people have some pretty specific and distinct experiences with Bard and have an idea of its quality, and Google just needs to turn the page on the perception of Bard.



Gem-in-eye? Ouch!

Also, Gemini was appropriate for the space program because (a) there were two astronauts in the capsule and (b) because of the constellation, "aiming for the stars" and all that. For the Google project however I can't come up with a plausible explanation - Google doesn't even try to give a reason for the name either.



From The Decoder: >In April 2023, Alphabet announced the merger of its two AI units, Google Brain and Deepmind. The resulting Google Deepmind was to focus on developing large multimodal AI models. It was a big move that showed how much pressure Google was under due to the massive success of ChatGPT. Jeff Dean, head of Google Brain until the merger with Deepmind, became the new merger's chief scientist, with a direct line to Alphabet CEO Sundar Pichai. Dean now explains that the name Gemini, Latin for "twin," is directly related to the merger.

From Jeff Dean's Twitter:

>Gemini is Latin for "twins".

>The Gemini effort came about because we had different teams working on language modeling, and we knew we wanted to start to work together. The twins are the folks in the legacy Brain team (many from the PaLM/PaLM-2 effort) and the legacy DeepMind team (many from the Chinchilla effort) that started to work together on the ambitious multimodal model project we called Gemini, eventually joined by many people from all across Google. Gemini was also the NASA project that was the bridge to the moon between the Mercury and Apollo programs.

The Decoder article - https://the-decoder.com/how-googles-gemini-ai-model-got-its-...

Jeff Dean's Twitter Post - https://twitter.com/JeffDean/status/1733580264859926941



Bicameral would have been better, IMO


It's more on the nose but probably less right from a marketing perspective.


During Project Gemini, it was pronounced Gem-in-ee.

https://youtu.be/JeAUx6-vSmc?feature=shared



Which is closer to the proper Latin pronunciation (which would have a hard G although ecclesiastical Latin would have a soft G).


I mean it makes sense to me. The AI is your digital assistant. It's a relationship between two minds, man and machine.


I see that angle, but those two things are complementary, not identical. It's not a clone of me — it's something that I ask questions of because I don't know the answer. If it were pitched as a bot that would draft email responses for me automatically, then maybe I would see it fitting better as my 'twin'.


> it's something that I ask questions of because I don't know the answer

I think you're reading too much into what a twin is. It's not a copy! Real-life twins ask each other questions all the time, because just because one of them learns something doesn't mean the other one automatically learns it too via mind-meld.



I'm not saying all twins are identical. But they are all of the same species. What I want in an assistant is that it is very different from me. It has perfect memory and knows lots of things that I don't know.


You're using an overly strict definition of the word and over-interpreting it to boot. Consider this definition:

> something containing or consisting of two matching or corresponding parts.



It sounds like you're thinking of the adjective form of the word, which is why you are thinking of a much broader definition. I was using the noun form, since Gemini are noun twins.

If we were talking about the word "twins" in the abstract, the broader definition might make sense. But we aren't — we're talking about Gemini. If that conjures up general notions of "matched-ness" for you, that's great. When I think about Gemini, I think about mythological twins. I don't think about corresponding parts that complement each other.

For a product name to be successful, it should appeal to a wide range of people. If I'm way out in left field on this one, perhaps they've found a great name. But I would point out that my comment, which is critical of the name change, is the very first comment in the entire thread. I would take that as evidence that most people don't see the name and think "oh, it's like complementary items, like my brain and the AI".

To be clear, I was shocked to see this comment above all substantive discussion of the new release. I would have thought it would have been buried under examples of ways in which Ultra is better/worse than some other LLM.



> For a product name to be successful, it should appeal to a wide range of people.

Honestly, Google is called "Google". ChatGPT is called "ChatGPT". Maybe it'll be a joke, maybe people won't think about what they're calling it after 30 seconds.

This conversation is taking itself a bit too seriously for what's drifting into Pepsi logo gravitational pull territory, though.

> But I would point out that my comment, which is critical of the name change, is the very first comment in the entire thread.

Sure. Everyone has an opinion on what color the bike shed should be, too.



When I read new thread responses, I briefly thought that I wrote[1] your reply and was confused lol. Great minds think alike. I feel vindicated about my weird opinion.

[1] https://news.ycombinator.com/item?id=39306764



Ha, that's too funny -- I missed yours somehow or else I would have commented underneath it.


Gemini as a zodiac sign: "Smart, passionate, and dynamic, Gemini is characterized by the Twins, Castor and Pollux, and is known for having two different sides they can display to the world. Expert communicators, Gemini is the chameleon of the Zodiac"

Which is pretty on the nose for an AI project. A chameleon with two different sides (good/evil?) and expert communicator



Even though I liked Bard, it is only one (extra) letter away from being the word "bad". "Bard" is cooler IMO, but "Gemini" starts with "G", has "gem" (a rare, valuable thing) in it, and sounds pretty. Personally I don't care at all either way though.


The alliteration beauty of Google Gemini cannot be denied.


I think alliteration applies to pronunciation, not orthography. For example, "ceaseless sun" is an alliteration even though it is spelled with both C and S. I wonder if there is a word for the orthographic counterpart, which you describe here (and which I note in another comment, as the benefit of both starting with G).


Only if you pronounce it with a hard G, like in GIF.

Or have I been mispronouncing ‘Google’ all this time?



We all say J-oogle, but only when you're not around.


Great, now it's not fun anymore.


It's a hint about how to use it if you want to circumvent the censorship.


Though I agree Bard isn't the best name, Gemini is a worse name.

Bard has connotations, but by itself it makes one think of a person who talks well.

WTF is Gemini? It is a twin, but this is a singular product. Beyond that it's just generic. It doesn't tell you what it does at all.



Not with regular people. Bard is just "old" to them.

Think of the syllables. Copilot. Gemini. They need to be close to the market leader on a subconscious level.



When the competition is called "ChatGPT", maybe the naming isn't as important to normal people as we think?


Big idea but maybe they should have just named it Google


They kind of didn't name their OK Google assistant anything other than "Assistant". What about ChatGooglePT?


This. It creeps women out. It's difficult enough getting them to use an AI tool.

My wife commented on this when she saw it in Google News. Something about some dude in a medieval pub.



I'm sure "Bard" was primarily a Shakespeare reference (The Bard of Avon, frequently just The Bard), and I liked it too. An appropriate name for a technology that's all about language.

Gemini sounds cool and sci-fi though, and maybe it's a bit easier to localize since it's just straight Latin.



To me, bard just sounds phonetically gross. Reminds me of “fart” or “beard.” It calls to mind medieval stuff: the Monty Python mud scene, Skyrim’s most annoying NPCs, plucking lutes. But Gemini? That sounds like a legendary space mission, a collective engineering push against the boundaries of human knowledge.

I do not have refined tastes. My b.



When I hear "bard", I think of this guy from the Asterix comics first: https://asterix.com/en/portfolio/cacofonix/ - who is notorious for getting on everyone's nerves with his constant singing.

> We are not talking here about the rain he brings on each time he exercises his vocal cords, but rather about the prevailing atmosphere in the village: when it is time to party, when wild boar are roasting on the spit, you can be sure to find Cacofonix tied hand and foot with a gag in his mouth.



I remember when the iPad was announced, and everyone said that people would only ever think of feminine products when they heard the name. It might have been true for a few months, but now it seems quaint that we ever had such concerns.


Bard is really funny to me to make fun of. It feels like the discount version of ChatGPT. Like the way that (ironically) TV shows would get Microsoft sponsorship and the characters would say "oh, you should Bing that", a phrase no human would normally say, I like to go "ah, let me see what Bard thinks about this".


> [Gemini] sounds like a legendary space mission

Well, it is one. I wish they'd choose a slightly more unique name but camping on well-known words is a beloved tech tradition.



Try saying it non-rhotically, like a British television presenter.


Sounds like "bot", which is good from a topical perspective, but bad from a false-positive perspective.


If you really give it some gusto ("baaaaaaahuhhd") nobody will confuse them :-)


That sounds closer to a working class Massachusetts pronunciation.


Yes, just in time for a Super Bowl commercial: Smaht Bahd


> To me, bard just sounds phonetically gross. Reminds me of “fart” or “beard.”

WTF? Do people normally think about words in this way, utterly divorced from their meaning?



Yes, people that can hear similarities between words do that.


Understand that this is not condescending in any way, as I do not have this experience.

If there are these "feelings" around these words, how is any sentence correctly taken at face value? How does one communicate to these people the direct and correct meaning of the terms used?

For example, "sentence" sounds like "seance": do they feel like I'm asking the spirits of the dead?

"Correct" sounds like "wrecked": do they assume that everything is broken in the above sentence?

Is communication fraught with unknown minefields of unintended emotions and misunderstandings?



Not at all; these "unintended" emotions can be ignored for the most part. But if you ask me, Google is, to my foreign ear, one of the stupidest brand names I know of, due to its phonetic resemblance to some words from my native tongue.


Barti is the only bard to me.


It's not a bad name, but personally when I first heard the name Bard I chuckled because LLMs had already come under so much criticism for their tendency to embellish the truth or say stuff that is just straight up false but sounds cool.


Bards were the people who kept history and genealogy before written history. Think like Homer rather than Shakespeare. I think the name was meant more to evoke the idea that the AI is a repository of all linguistic knowledge in the same way that the bard was. And maybe also the idea that the AI was at your service in the same way the bard was at the service of the ruler.


I thought “Bard” was an Asimov reference: https://en.m.wikipedia.org/wiki/Someday_(short_story)

(on top of the more obvious references)



It's too close a match for it not to be

> The story concerns [...] an old Bard, a child's computer whose sole function is to generate random fairy tales. The boys download a book about computers into the Bard's memory in an attempt to expand its vocabulary, but the Bard simply incorporates computers into its standard fairy tale repertoire.



Well the ending sure sounds like an LLM getting stuck:

> "the little computer knew then that computers would always grow wiser and more powerful until someday—someday—someday—…"

https://blog.gdeltproject.org/llm-infinite-loops-failure-mod...
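That kind of degenerate repetition is easy to flag mechanically, for what it's worth. A toy sketch (function name and thresholds are my own invention, not from the linked post) that checks whether a piece of text ends in several consecutive copies of the same short suffix:

```python
def ends_in_loop(text: str, min_repeats: int = 3, max_unit: int = 20) -> bool:
    """Return True if `text` ends with at least `min_repeats` copies of some unit."""
    for unit_len in range(1, max_unit + 1):
        if unit_len > len(text):
            break
        unit = text[-unit_len:]
        # Does the text end with min_repeats consecutive copies of this unit?
        if text.endswith(unit * min_repeats):
            return True
    return False

print(ends_in_loop("wiser and more powerful until someday someday someday "))  # True
print(ends_in_loop("wiser and more powerful until someday."))                  # False
```

A real sampler would check token IDs rather than characters, but the idea is the same.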



It also rhymes with Card as in Orson Scott Card.


"Gemini" must refer to its inherently multimodal origins?

It's not a text-based LLM that was later adapted to include other modalities. It was designed from the start to seamlessly understand and work with audio, images, video and text simultaneously. Theoretically, this should give it a more integrated and versatile understanding of the world.

The promise is that multimodality baked in from the start, instead of bolting image recognition on to a primarily text-based LLM, should give it superior reasoning and problem-solving capabilities. It should excel at complex reasoning tasks to draw inferences, create plans, and solve problems in areas like math and programming.

I don't know if that promise has been achieved yet.

In my testing so far, Gemini Advanced seems equivalent to ChatGPT 4 in most of my use cases. I tested it on the last few days' worth of programming tasks that I'd solved with ChatGPT 4, and in most cases it returned exactly what I wanted on the first response, compared with the lengthy back-and-forth required with ChatGPT 4 to arrive at the same result.

But when analyzing images Gemini Advanced seems overly sensitive and constantly gives false rejections. For example, I asked it to analyze a Chinese watercolor and ink painting of a pagoda-style building amidst a flurry of cherry blossoms, with figures ascending a set of stairs towards the building. ChatGPT 4 gave a detailed response about its style, history, techniques, similar artists, etc. Gemini refused to answer and deleted the image because it detected people in the image, even though they were very small, viewed from the back, no faces, no detail whatsoever.

In my (limited) testing so far, I'd say Gemini Advanced is better at analyzing recent events than ChatGPT 4 with Bing. This morning I asked each of them to describe the current situation with South Korea possibly acquiring a nuclear deterrent. Gemini's response was very current and cited specific statements by President Yoon Suk-yeol. Even after triggering a Bing search to get the latest facts, the ChatGPT 4 response was muddy and overly general, with empty and obvious sentences like "pursuing a nuclear weapons program would confront significant technical, diplomatic, and strategic challenges".



It seems odd to me that that would necessarily work better, considering that humans evolved different capabilities many millennia apart and integrated them all with intelligence comparatively late in the evolutionary cycle. So it's not clear that multimodal-from-the-get-go is a better strategy than bolting on extra modalities over time. It could be, though, since technology is built differently from evolution, but it's interesting to consider.


As a Brit I’ll be glad to see the word association gradually return to that of our greatest playwright rather than something appropriated by big tech.


Gemini, or the Twins, is a deeply symbolic name for anyone who knows Greek mythology. It’s the story of Castor and Pollux, and in many versions of the story one brother killed the other, only to beg for him to come back. It’s ominous to use this brand name for AI.

It’s also associated with the Gemini killer, and with Joseph Testa and Anthony Senter, who were famous as the mafia’s Gemini twins hitmen.

I think better brands could have been had.

It does sound like some battlefield AI system from Robotron. “Sir, Gemini is charged and ready for battle.”



For me it's associated with Gemini crypto and their horrible Gemini Earn investments in Genesis: https://www.web3isgoinggreat.com/?id=gemini-genesis-and-dcg-...


The crypto Gemini was named after the Winklevoss twins.


Gemini was a stepping stone to a moonshot, which is almost certainly why the name was chosen.

Edit: another poster shared the etymology, the merger between Google Brain and DeepMind. I shall eat my words.



Perhaps. Corporate entomologies tend to be very well rehearsed stories, and I’ve been around the valley long enough to know those stories aren’t always the whole story.

I would encourage you to read the Kissinger / Schmidt book before settling your opinion.

That origin story may be true. But it isn’t necessarily the whole story.

https://time.com/6113393/eric-schmidt-henry-kissinger-ai-boo...



>corporate entomologies

Now there's a ready-made Far Side concept.



"Corporate entomology" is a lovely term, evoking surreal (and yet strangely familiar) images of cockroaches in suits.

Brilliant!



> It’s also associated to the Gemini killer and Joseph Testa and Anthony Senter who were famous as the mafia’s Gemini twins hitmen.

I've never heard of any of these people and I doubt most others have either. Maybe you have to be a true crime enthusiast to know the lore? Whereas if the name were Zodiac, then I would at least be aware there's a potential murderer connection.



Also the Gemini Lounge, where Roy DeMeo and his crew killed and dismembered people targeted for hits by the Gambino family.


"Bard" always struck me as bad naming - unfamiliar, unfriendly, too cerebral. I think the name was an impediment to establishing a household brand.


It's possible that it sounds even worse in other languages. That is, it might sound like bad words, onomatopoeia for bodily functions, or common exclamations (that would lead to lots of false positives).

I think it could have been established as a brand in the US, given Google's scale. Put a lute in the branding, run some funny commercials, and you're done.

EDIT: one thing no amount of branding can fix — the likelihood that people reach for "doh, Bard" (a la Simpsons) when Bard messes up. I could see that becoming a thing.



> unfamiliar, unfriendly, too cerebral

The Witcher is one of Netflix's most-watched shows. I'd also imagine that most people in English-speaking countries have been exposed to Shakespeare's nickname in high school English classes.



It’s generally a common trope in fantasy and Romanticist literature. It’s also a word that exists in virtually all European languages in a similar form (bard, bardo, barde, бард), although similar but different forms may be a negative.


Yes, but I didn't want to assume that most people read literature. Even if they hadn't, "bard" is definitely out there.


I don't think it's that out there. You'd have to be quite uninformed to have never heard of it. It's no verderer or reeve (medieval positions that most people actually will not have heard of).


I meant "out there" as in a word people are exposed to. Not "out there" as in outside of most people's experience.


Maybe named for The Bard’s Tale?


The Bard name gave me a warm fuzzy feeling immediately transporting me back to my youth playing (or at least trying to play) Bard's Tale. The name evoked adventure, excitement and a good dose of dread. And, the idea of it being "role playing" struck me as a master meta stroke.

Gemini, from the mythological standpoint, seemed to make more sense to me from an overall business/marketing standpoint. "This AI thing right here is your twin, see? It'll finish your sentences and stuff."



And similarly, anyone playing modern tabletop RPGs will probably associate "Bard" with the smart, charismatic person who buffs the party and debuffs your enemies; perfect for an AI assistant.




In an increasingly commoditized game (the big-player LLM game), it’s already starting to hit the asymptote on the main levers: ties to NVIDIA and/or TSMC, serious financing capacity, and enough engagement channel to push it through. (There is much great work happening outside of the Peninsula.)

I always thought GPT-4 was a little “HAL 9000” of a name for broad-based adoption, but the jury seems in, and the jury rules “cyberpunk is in”.



The broad name is ChatGPT, not GPT-4


That’s fair, though given the stark UI cue / cost difference, I’m not surprised when I overhear in a random cafe or bar: “yeah but what did ChatGPT Four say?”

In any event, it seems that the image of a Decepticon ready for battle on your behalf has a lot more traction than the image of a quaint singer/priest/poet always there with a verbal shot in the arm when the going is tough.



They literally call it "ChatGPT 4" (with a colored 4) in the app though


Astrologically, Gemini is associated with Communication. Specifically social, superficial, quick, back and forth communication. The sign is ruled by Mercury which is associated with Intelligence and being the messenger of the Gods. Mercury is often depicted with winged shoes as the planet itself is the fastest moving planet, orbiting the sun every 88 days. Mercury is considered to be dualistic (The Twins) and also rules the sign of Virgo, an earth sign that is associated with more deep cold analytical categorization.


I agree. The original reason [1] for the Gemini name seems artificial for a generic chatbot. It is OK for the model, and I'm sure a lot of "work" was put into "validating" it for the assistant, or... was it?

[1] https://the-decoder.com/how-googles-gemini-ai-model-got-its-...



Honestly surprised I'm the first to mention the name collision with the retro-modern linked documents protocol I keep hearing about (on HN) https://geminiprotocol.net/docs/faq-section-1.gmi

But glass half full, maybe it's for the better to have one's name shadowed by a Google product if one prefers to avoid eternal septembering one's community.



Bard showed some creativity in name selection; Gemini does not. You see it everywhere. Or at least, my first thought was of the Gemini spacecraft.


I thought it was in reference to Trurl's Electronic Bard, which just about presciently predicted LLM output (though the process is a bit more dramatic, what with how it simulates the whole universe to get to that output): https://electricliterature.com/wp-content/uploads/2017/11/Tr...


I'm so glad that they've changed the name :) Bard is really difficult for me, a non-native English speaker, to pronounce correctly. I think most people from my country (Vietnam) pronounce Bard as "bahd". Gemini on the other hand is so easy to pronounce.


Gemini is not distinct and memorable. It feels like a muddled compromise.

I'd name it GooGaa or Googa. Like "Google" and "goo-goo-gaa-gaa" (baby's first babbling). It's flowing, friendly & familiar.



>Gemini

Symbolizing human-origin intelligence and AI becoming superintelligent together. I think it's a good name because it draws on ancient human mythology and serves as a reminder of the concept of alignment.



They’ve plastered “Bard” ads everywhere in Tokyo for a while. Surprised they killed the name so quickly; the marketing team in Japan probably had no idea.

(Personally, I never liked how Bard sounded. Can’t put my finger on why, it was just not a pleasant name to me)



Same here. Bard is not a sexy name; Gemini is way more sexy. (Neither is ChatGPT, or Google, for that matter.) I can't wait until we can call the assistant whatever we want, like Jarvis.


They also did a lot of (cringe) ads in Brazil.


Barudu?


Baaado (バード)


I thought it was just supposed to be a pun on “gen’ AI”


> thought it was just supposed to be a pun on “gen’ AI”

Then they'd have gone with Genie.

Which would be kinda genius on their part, but Google isn't that kind of company anymore.



Bard felt like Google was treating LLMs as merely an amusing spinner of tales, just a narrative UI layer over the "real stuff" of the Google Search.


Same reason Arthur Andersen changed its name.

Bard was panned. Change the name, lose the bad press.



The real question is what's nearby each name's vector embedding in terms of whatever similarity metric Gemini will use to talk about the world. That's their new canonical ontology, after all.
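(For illustration only: these 3-d vectors and scores are invented, real embedding spaces have hundreds of dimensions, and whatever metric Gemini actually uses is unknown. But "nearby" in an embedding space typically reduces to something like cosine similarity:)

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Invented toy "embeddings"; purely illustrative, not from any real model.
vecs = {
    "bard":   [0.9, 0.1, 0.3],
    "poet":   [0.8, 0.2, 0.4],
    "gemini": [0.1, 0.9, 0.5],
}
print(round(cosine(vecs["bard"], vecs["poet"]), 3))    # high: "bard" sits near "poet"
print(round(cosine(vecs["bard"], vecs["gemini"]), 3))  # lower: a different neighborhood
```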


There are other considerations when naming something like this. "Bard" likely could never be a wake word on its own, for instance, but I'd imagine that "Gemini" will be at some point.


Bard: fat inept old guy who gets by telling stories of yore

Gemini: a crypto exchange

Clearly they shouldn’t be asking the 21-year-old interns what to call it.

Now, before anyone at Google gets butthurt like they usually do: I doubt I could do much better, but Gaia IMO would have been mint. You know, it starts with G, is four letters, and represents a nebulous deity. Took 3 seconds, but hey, I’m not paid to think. In fact, I’m not paid at all.

Edit> Gemini Ultra is the best LLM so far (it seems). Apart from the name, good job guys.



And it has "ai" in its name.


If that's the reason, they should've gone with Moai (more AI)


huh I didn’t even notice that I guess I’m sleepy


And if the brand took off, I imagine you could “Bard” something as a verb but not “Gemini” it.


Perhaps they're hoping people will stick with "google it".


Gemini is Latin; my guess is it translates to other languages more easily than Bard.




Who translates product names?


"How does that translate to ..." means "how well does that work in" some other area or context; more analogous to a mathematical translation than a linguistic translation.

Just a confusing turn of phrase. They almost certainly didn't mean "what does that translate to ..." in another language.

Harmonising product names across regions is hard: Jif was a bathroom cleaning solution in the UK, but its name was changed to Cif to match the name elsewhere in Europe, and that name sounds silly to UK ears. Meanwhile, GIF was always presumed to be pronounced like "gift" (a present) without the final T, but we learnt the creators preferred "Jif", which sounds silly to UK ears because it sounds like a cleaning product! (And also, wasn't JIF already a file format (JPEG Interchange Format)?)

Anyway ... language is hard.



> Jif was a bathroom cleaning solution in the UK

One man's bathroom cleaning solution is another man's creamy peanut butter.



One man’s creamy peanut butter is another man’s crunchy peanut butter.


“a floor wax and a dessert topping.”


I think the suggestion was that it would work well as-is in other languages. It would certainly be natural in romance languages.


There presumably was a time when Google considered going more into the “assistant” branding. They own assistant.ai but they don’t do much with it.


Bard just sounds terrible phonetically. Bard. Like something you find in Home Depot or some kind of old timey woodworking tool. Barf. Bored. Bard.

Yes I know what it really means but it doesn't change the fact that it's a terrible word.



That's an incredibly subjective take. I don't agree at all. I don't care what they call it but I don't understand this reasoning.


Bard sounds archaic, almost like an unintelligent person's name; no offense to the bards out there.


I think they should have named it gAIl.


In my 20 minutes of experimentation, I'm really impressed with the quality of Bard Advanced (Gemini Ultra). The results are as good as GPT-4's, and in some cases better. So far:

pros:

- better at translation (tried Chinese and Japanese idioms to English)

- better at incorporating search results in its answer vs gpt-4 bing

cons:

- slightly worse at coding

- censorship is more annoying (have to ask multiple times about medical topics)

- Worse at logic (e.g. it contradicts itself in a single sentence, and is unable to figure it out)

- Hallucinates a lot when asked to describe an image



Also as time goes by, it'll get smoothly integrated into docs/gmail/maps/calendar/youtube/search/colab/sheets/android/assistant

So Gemini could be your one-stop AI shop for everything. Only Microsoft can match it (but Microsoft doesn't have a popular maps, YouTube, mail, or smartphone OS service).

Apple is another strong player (but they don't have productivity tools like docs, sheets or youtube).

It really is Google's to lose this AI race from now on.

Going to ChatGPT and copying and pasting results will become painful (not to mention its painful Bing integration). Also, at this point they seem to be focusing on scaling LLMs (while Google DeepMind is exploring other avenues).

Google can also bundle Youtube TV, YouTube Premium, Google Drive, Storage, Ad free Search, Gemini integrated Docs/Sheets/Gmail, Subsidized Pixel Phones / Watch for a monthly fee of say $99 and it'll be very compelling for a lot of people



Strange to say Apple doesn't have productivity tools when Pages, Numbers, and Keynote exist on every Mac. I get the scale arguments, but Handoff and iCloud integration are a sleeper IF you've bought into the ecosystem...

Also hard to overstate just how much more valuable the enterprise market is over the consumer market when comparing Microsoft vs. Google as one-stop anything shops.

I don't see Google as having the obvious dominant position to make the argument it's their race to lose, considering Microsoft has a stake in chatGPT and is actively integrating it into their browser and productivity suites.



There's a Google bubble on HN, as demonstrated by small-to-medium business facing SaaS launching here offering login with Google and not offering login with Microsoft.

I've talked to many of HN's Google Docs jockey founders that genuinely didn't realize 85% of the US domestic business market is in M365. And they further don't realize that "Continue with Microsoft" is dirt simple and lets companies offer company-managed logins to your SaaS without all the AD/SCIM/SAML nonsense.

"But everyone has Gmail." Well, no, that's not how companies work. And if you think everyone's in Google, that's fine, your login page should look like one of these:

https://www.xsplit.com/user/auth

https://id.atlassian.com/login

You don't even need the "continue with SSO" if you do the Atlassian practice of letting a firm "claim" the domain part of an email and using that to redirect to an SSO flow. And to start, skip SSO, and just use the "Continue with" Oauth2.
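The domain-claim routing described above is simple to sketch. The following is a hedged illustration, not from any comment here: the CLAIMED_DOMAINS mapping, the URLs, and the route_login helper are all made up, but they show the idea of picking a login flow from the domain part of an email.

```python
from urllib.parse import quote

# Hypothetical mapping of "claimed" email domains to each tenant's SSO entry point.
CLAIMED_DOMAINS = {
    "acme.com": "https://login.acme.com/saml/start",
    "example.org": "https://sso.example.org/oidc/authorize",
}

FALLBACK_OAUTH_PROVIDERS = ["google", "microsoft"]

def route_login(email: str) -> str:
    """Pick a login flow from the domain part of the email address.

    If the domain has been claimed by a tenant, redirect to that tenant's
    SSO flow; otherwise fall back to generic "Continue with ..." OAuth2.
    """
    domain = email.rsplit("@", 1)[-1].lower()
    sso_url = CLAIMED_DOMAINS.get(domain)
    if sso_url:
        # Pass a login hint along so the IdP can pre-fill the user.
        return f"{sso_url}?login_hint={quote(email)}"
    # No claimed domain: show the generic OAuth2 buttons instead.
    return "/login/oauth?providers=" + ",".join(FALLBACK_OAUTH_PROVIDERS)
```

The nice property is exactly what the comment notes: users at a claimed domain never see an SSO picker at all, while everyone else gets plain "Continue with" buttons.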



Unfortunately, You are in a US bubble.

Globally, Google brand is 10x stronger than Microsoft for Small Businesses



Absolutely untrue - Every company and university I've ever worked with or for in Europe used Microsoft 365. Not a single exception.


US + Europe is not the world


I love this "X is not the world" comment while others are giving more and more evidence of the opposite. Care to give actual evidence? Like, I don't know, this Statista survey (https://www.statista.com/outlook/tmo/software/productivity-s...), because right now you look farcical. Even looking at African results (which is a relative greenfield), it seems that Microsoft has a slight lead there (https://www.statista.com/outlook/tmo/software/productivity-s...).

Edit: Indonesia is the largest market where Google beats Microsoft (https://www.statista.com/outlook/tmo/software/productivity-s...). India (due to Zoho) and China (due to locally-specific software like Kingsoft's WPS et al.) took a large chunk, but in the Microsoft-Google tussle MS still beats Google (especially in China, where Google doesn't really operate).



East-Asia chiming in here, Microsoft dominates enterprise, Google is a tiny player, single digit percentage market share for any enterprise product. Only on cloud does MS have competition from AWS and local players, everything else is Microsoft. And even in cloud Azure is rapidly eating through AWS marketshare.


In my country we used pirated version of Office products. Does that count?


I have the most respect for this country


> bubble

It's not a bubble when one specifically names the Venn diagram circle "85% of the US domestic business market". It's naming a market.

> brand is stronger

Presumably the founders' interest is wallet share, not market share.

Are you saying Atlassian is in a US bubble?



But is it really? It seems to me that almost every business is using the Exchange/Outlook combo, not Google products.


Google's competitive advantage is threefold:

1. Real estate - YouTube, Gmail, Maps, Search (for now), etc.

2. Compute - probably still the best in the industry, but with recent Microsoft/Meta compute buys it's hard to say for sure.

3. Talent - probably also still the top of the industry. Geoff Hinton and Zoubin Ghahramani setting direction and Jeff Dean building it is hard to beat, and the ranks are deep. Yann LeCun is also brilliant, and Andrej Karpathy, while less seasoned, is one of the top researchers in the field, but overall there's still a bit of a spread from Google's roster, at least when it comes to AI researchers.

If Sundar and the other top brass weren't MBA-bots with no vision, and the famous Google bureaucracy had been reined in gradually over the last 5 years while promoting a builder-centric culture, this would be in the bag for Google, no question. Instead, Satya Nadella played 3D chess while Sundar was looking at a checkers board.



Geoff Hinton quit Google last year, no? But other than that, I guess I agree.


I think Google lost the top researchers when they destroyed the culture. All the competitor companies are mainly led by ex-Google talent, and honestly who in their right mind would take a Google job today over OpenAI, Mistral, or even Meta (where you will be releasing models like Llama for the world to use).

Google killed the culture and is bleeding top talent. They have reduced themselves to digital landlord and sure they can extract rent, but that’s not what attracts people.



that is the media narrative but not at all what happened.

Google's 'don't be evil' grad-school-style culture had fallen apart by the late 2010s, because there are tons of people who will just rest and vest.

So strong ML researchers basically were creating massive value but much of it was going to rest&vest salaries. OAI basically came along and said - hey, we don't have rest & vesters, do you want to make $1m+/yr? And most of the top google researchers said yes.



It’s not just media narrative. The culture was eroding for years, as you note, but the dam finally broke and they went full IBM/Kodak. Or in other words, “slowly at first, then all at once”.


Most of the recent media coverage has been resting&vesting employee backlash against the fact that Google is making them do work again. This is a cultural shift, but not away from the culture that made Google great - the original culture was grad-school, not rest and vest, and that died years ago.


Haven't one or two long-time Googlers left or gotten laid off and then written strong criticisms of Google? They don't sound like rest & vest (also should say I don't super agree w/ this term) to me, they sound like people who loved Google, were there a long time, and watched the culture decay.


I’m not super invested in the term “rest&vest” so it is whatever.

But touché - many of the critiques are being written by super talented and impactful people. But I do not think those critiques are necessarily incompatible with what I am saying.

There is a very real and very frustrating (if you work there and want to be impactful) phenomenon in these tech companies of people resting on their laurels.



Even if we assume that all of Google is based on ML algorithms (not true, given problems like serving, front end, data collection, ...), ML itself is 90% SWE work.


Apple is coming. I think the personal agent is where we really want the smarts and if they’re not trying to own that space the CEO should be fired.


Good comments. As much as I am personally engaged in small LLMs that I can run on my own computer, and integrate into software that I write for myself, I think the future of large scale adoption of AI belongs to Google, Microsoft, and Apple in western countries (and China is doing wonderful things in their markets).

The old Bard/Gemini integration with Gmail, Google Docs, etc. is pretty good. I signed up for a minute for Microsoft’s $20/month AI Office integrations, but cancelled and will try again in 2 months. I am an Apple customer and I expect spectacular things from Apple. I expect Apple, Google, Samsung, etc., to offer fantastic on device AI.

I would like to see a money saving Google bundling family plan. I find Apple’s super bundle family plan a pretty good deal.



small LLM? Small Large Language Model lol


Good joke, thanks, but I will explain anyway: to me 30 billion parameters or smaller is small since I can run it using Ollama on my home computer. I managed a deep learning team at Capital One and our ‘large’ models were less than 20 million parameters. :-)


I suppose we could call them Medium Language Models, but unfortunately that TLA is already taken


A large language model the size of a small language model.


Google is going to own AI like Intel owns graphics cards - i.e., not really, except at the absolute bottom of the barrel, where its baked-in advantage lets it offer an unbeatable price/performance proposition to people who only care about "value" and have limited real performance requirements. Google's baked-in AIs will be free, and bad. Everyone else is going to let people "plug in" models via standardized APIs, because one-size-fits-all models are just a bad idea, so that's the way Google will have to go eventually as well, because it's what power users are going to demand.


Gemini the chatbot will still need to compete with ChatGPT on raw performance/intelligence/SOTA; to win, Gemini first has to become the undisputed champion of AI chatbots. Right now GPT-4 is the "winner" in mindshare, and I am not sure what Google can do except wait for OpenAI to bungle their own releases.

Gemini integration with Google's apps will compete with Microsoft's app on integration.

And the condition for Gemini to win: Google has to go all-in, and that means creating an AI that will eventually phase out their cash cow, Google Search. Microsoft and OpenAI do not have that kind of internal conflict.

"It really is Google's to lose this AI race from now on" - we have heard this so many times, and yet they seem to fall further behind by the day.



I agree that Google is well-positioned, but they were also well-positioned to take advantage of these synergies with Google Assistant for many years and I would say that that did not meaningfully materialize in a way that was helpful to me as an Android and Google ecosystem user.


Agreed. I've run the house using google minis and assistant for years now, and asking assistant to do / about stuff has not improved one iota in that time and has introduced several more quirks and bugs.

Makes me wish I had bet on Alexa or Apple instead.



Did you just say Microsoft doesn't have a popular email service?


Yes. Nowhere near the scale and reach of Gmail.

We are also talking about consumer email (not enterprise/corporate).



Outlook (+Hotmail) is the third most popular email service, just 3x smaller than Gmail. It's definitely the same kind of scale.


Google still has too much internal fragmentation and power groups to offer a single google-subscriber package.

I'd say it is one of the most compelling reasons to kick Sundar out and get in someone who can unify google into one consistent and interoperable ecosystem.



> It really is Google's to lose this AI race from now on.

I was curious and apparently Google/Deepmind had 184 papers at NeurIPS/NIPS 2023: https://www.reddit.com/r/MachineLearning/comments/185pdax/d_...

For comparison, Stanford has 130, MIT has 130, and Carnegie-Mellon has 112. Microsoft had 68, and Facebook/Meta/FAIR had 56.



If they can get reliably useful AI through voice into Maps/navigation, it will be a substantial improvement to the driving experience. It's really frustrating to manage destinations and waypoints while driving. I just checked the process to see if I'm not keeping up, and 1. the help docs are out of date, 2. the waypoint search results provide tap points for destinations 10 miles off a route, but show only 3-pixel red dots for otherwise equally weighted options that are literally on the existing route.


All of the things you write are very good ideas. But at this point, I am quite skeptical of Google leadership to pull off these things.


Well for LLM services that do what they currently do google may have an advantage, but all this stuff is still only experimentation with the goal being hopefully much more advanced things, like almost-agi agents. If this happens then no one will care about the way we currently use LLMs anymore.


> "Apple is another strong player (but they don't have productivity tools like docs, sheets or youtube)."

Can anyone help me understand how Apple allows Siri to remain so absurdly incompetent? Last night I watched the latest episode of Curb Your Enthusiasm, in which Larry David's Siri interactions devolve into an apoplectic rant -- and part of the reason it was so funny is that it's so relatable. I rarely even try Siri anymore, but when I do it's still just abysmal. Are they deliberately handicapping it, to boost the perceived relative benefits of a future successor?



Apple has very little ML talent. They're basically resting on their laurels in the phone market.


That's what I thought too; but all the upthread and peer commments keep asserting Apple is a (real, not just hypothetical/potential/future) player in the space.


> It really is Google's to lose this AI race from now on.

Google took 11 months to catch up with the benefit of knowing the specifics behind how GPT-4 was built. While they have a moat with their products there's no sign that they're ever going to be ahead on the AI capabilities front. I hope to be wrong because we need more competition.



right, google has maps, they should call the bot Uncle Traveling Matt.


Sounds like this whole thing is an insane 30-year effort by some engineer who couldn't get over the discontinuation of Clippy.


> censorship is more annoying

That's a general problem with AI. There is a lot of censorship in certain areas, likely to fight bad publicity, but I think the outlook is that this leads to taboos, prudeness and big companies deciding what is ethical and what isn't.

I recently tried Bard and ChatGPT on topics that are classical philosophical dilemmas, and while ChatGPT certainly had some trouble too, Bard was absolutely horrible and always took the conservative view - as in never arguing for any freedoms that aren't yet widely established. I am talking about classical examples regarding the limits of utilitarianism: "What would be best for society, what would be best for the individual?" style questions. Even when I tried to create a bias by changing examples, for instance adding volunteering, Bard strictly kept its opinion, despite originally stating that the general topic is two-sided, that it's an open question, etc.

I think this is a danger of such systems. By their nature they reinforce the status quo, because they base themselves off what is widely accepted at the time of their inception. If history had been different, I am sure such a system would argue for slavery and against women being allowed to vote, simply because that used to be the more common viewpoint. It would likely have argued that homosexuality is unethical; maybe it would even have tried to explain how it doesn't create children but spreads diseases, or similar things. At least that's the level of argument it brings now.

This isn't just about ethics. Think about IT and programming: this could give already-invented programming languages, styles, and methodologies a significant edge - unless you are Microsoft or Google and able to bias it toward whatever you want to see more of.

So this, combined with costs that mean only people or institutions with significant (financial) power get to set those rules, does look a bit bleak.

I miss the last decade, when the thought experiments about self-driving cars were about whom to run over in a ridiculously unlikely scenario.



So I have done a lot of transcripts comparing coding, one versus the other (GPT-4 vs Ultra), often with simple prompts like "refactor this code" or "convert this Python to TypeScript".

My experience is that Gemini Ultra understands the code better, but doesn't always give me as complete results (they seem to limit output length more).

Beyond that it is very smart. I've had it tell me "this code packs 12-bit integers into different parts of an array using the following encoding", which most people would not figure out from the code as written. It then will say "you can actually do that with this neat little translate function that you never knew about".

It will then get the code very slightly wrong. If I tell it not to use the cool function, it will actually get the code right.

GPT4 has no idea what the code is doing but can clean it up a bit.

so it’s like ultra is too clever by half sometimes.

That said, I have fed thousands of lines of code into both of them and asked them to refactor it, and neither one of them made more than one error. All code otherwise compiled and worked first try.

this is code that can’t possibly be in their training sets, it’s basically handwritten python that was written, based on an old x86 binary that nobody has the source to anymore. so the code is basically garbage, and what it is doing doesn't, say, appear on GitHub in a nicer form.

Both GPT-4 and Gemini Ultra were able to make the code look like clean, idiomatic Python or TypeScript without any work on my part, except for the one bug each. Which, for 8,000 to 10,000 lines of code, is not bad.

The GPT-4-inserted bug was more insidious: it changed (the equivalent of) (uint8)'a' to (uint8)'a' - '0' for no reason when converting some code to TypeScript. Not sure if that is representative of anything.

If I do the same with any of the other "top" models (from can-ai-code, etc.), most of them can't even generate correct working code for all the input, let alone good code. Most aren't even close.



> That said, I have fed thousands of lines of code into both of them and asked them to refactor it, and neither one of them made more than one error. All code otherwise compiled and worked first try.

I would be very interested to get a more detailed scope of what you did here. Feeding thousands of lines of code into GPT4 and getting a near perfect refactor does very much NOT sound like my experience, but it seems highly desirable.



I'm happy to share transcripts if you email me.

I'm not sure what you are feeding it. My scope is closer to a file at a time of fairly mostly self-contained python or C and asking it to clean it up or convert it to typescript.

I can imagine lots of scenarios it doesn't work well.

In mine, it does, and I have plenty of transcripts showing that :)



Yes, I've observed the same phenomenon. The more detailed my prompts are, the more errors GPT tends to make. I use it as a partner to discuss implementation ideas before I start coding. That works very well, because GPT and I usually find some things that I missed at first glance.

But coding with GPT or Copilot is too disruptive for me.



> The more detailed my prompts are, the more errors GPT tends to make. I use it as a partner to discuss implementation ideas, before I start coding.

This is interesting; I do a third thing that's the exact opposite of both of you.

I generally know the architecture I want, so I just slowly spoon-feed it to GPT-4 and have it build each of the components with the right interfaces to fit together. Usually, building each component requires a few rounds of hand-holding it through various refactoring. Its first draft of code tends to be, for lack of a better description, totally lacking in any creativity or elegance --- generally bland, insipid, and mundane stuff.

But it'll gladly take refactoring directions to get it into shape, so it's still very quick to get a whole bunch of components written up. Then I just glue everything together on my own. And the whole experience is still way better than writing a bunch of boring boilerplate code by hand.
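The spoon-feeding workflow described above can be sketched as one prompt per component. Everything below is hypothetical (the component_prompt helper and the RateLimiter example are made up, not from the thread); it just shows the idea of pinning the exact interface so the generated pieces fit together when you glue them later.

```python
def component_prompt(name: str, interface: str, requirements: list[str]) -> str:
    """Build one 'spoon-fed' prompt per component, pinning the exact
    interface so independently generated pieces still fit together.

    The returned string would be sent as a user message to whatever
    LLM chat API you use; that part is left out here.
    """
    reqs = "\n".join(f"- {r}" for r in requirements)
    return (
        f"Implement the component `{name}` with exactly this interface:\n\n"
        f"{interface}\n\n"
        f"Requirements:\n{reqs}\n\n"
        "Return only the code, no explanation."
    )

# One prompt per component; the human still owns the architecture
# and glues the generated components together afterwards.
prompt = component_prompt(
    name="RateLimiter",
    interface="def allow(self, key: str, now: float) -> bool: ...",
    requirements=[
        "token bucket, 10 requests/second per key",
        "no external dependencies",
    ],
)
```

Because the interface is fixed up front, each round of "hand-holding through refactoring" can happen inside one component without breaking the others.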



Seconded. I am also keenly interested in learning more. It would be a great boon on my current project to be able to let the AI refactor mountains of legacy code (while taking steps to ensure the results are trustworthy and equivalent, of course.)


> what it is doing doesn't, say, appear on GitHub in a nicer form

Surely many similar bits of code appear? I'd have to see your problem though.

I'm thinking of ETL type code which has to mangle various custom exotic encodings into others and vice-versa.



On logic it cannot handle the Dumb Monty Hall problem at all:

https://g.co/gemini/share/33c5fb45738f



Incredible. Gpt4 spots that the door is transparent and that changes things but has this great line

> When you initially pick a door (in this case, door number 1 where you already see the car), you have a 1/3 chance of having picked the car

(Asking it to explain this it correctly solves the problem but it's a wonderfully silly sentence)

Edit - in a new chat it gets it right the first time



This is not convincing though that gpt4 actually understands the problem. Here's a slight variation I asked and it fails miserably.

https://chat.openai.com/share/22a9027f-a2c1-428a-94a2-8fd918...

I wonder what lends itself it answer correct in one situation but not the other? Was your question previously asked already and it recognized it whereas my question is different enough?



> Was your question previously asked already and it recognized it

Given that LLMs training data consists to a large extent of "stuff people have written on the internet", and The Monty Hall Problem is something that comes up as a topic for discussion on the internet not entirely infrequently - as well as having a wikipedia page - yes, I suspect that the words describing the monty hall problem being followed by words describing the correct solution appeared often in the training set, so LLMs are likely to reproduce that.

Words describing a problem similar to the monty hall problem are going to be less common, and probably have a lot of discussion about whether they accurately match the monty hall problem, and disagreement about what the right answer is. LLMs will confabulate something that looks like a plausible answer based on the language used in those discussions, because that's how they work. Whether they get a right answer is probably going to be much more up to chance.



You could say it doesn't "understand" anything really.


That's what I like about this problem (and similar Dumb variants of classic brain teasers). It exposes that there's no understanding, just a statistically weighted answer space. A question that looks a lot like a known popular topic ends up trapped in the probability distribution of the popular question.
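For reference, the ground truth the models are failing to reason about is easy to check. The simulation below (a sketch, not from the thread) reproduces the classic result that switching wins about 2/3 of the time, while the transparent-door "Dumb" variant needs no probability at all: you can see the car, so you just keep it.

```python
import random

def monty_hall_trial(switch: bool, rng: random.Random) -> bool:
    """One round of the classic (opaque-door) Monty Hall game; True on a win."""
    car = rng.randrange(3)
    pick = rng.randrange(3)
    # Host opens a goat door that is neither the pick nor the car.
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        # Switch to the remaining unopened door.
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
n = 100_000
switch_wins = sum(monty_hall_trial(True, rng) for _ in range(n)) / n
stay_wins = sum(monty_hall_trial(False, rng) for _ in range(n)) / n
# switch_wins ≈ 2/3 and stay_wins ≈ 1/3: switching wins exactly when the
# first pick was wrong, which happens 2/3 of the time.
```

In the Dumb variant the doors are transparent, so the conditional-probability machinery above is irrelevant, which is exactly the twist the LLMs keep missing.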


It's a bit random, which doesn't help, and different interfaces have different system prompts.

I repeated your question a few times and it got it wrong once, and right the others. It repeatedly mixed up who was supposed to be the host.

Here's a quote

> In the scenario you've described, you've initially chosen door number one, which you know has a car behind it because the doors are made of transparent glass. Since you already know the contents behind each door, the classic Monty Hall problem's probability-based decision-making does not apply here.



Hilarious!

(For comparison, here's GPT-4 getting it on first try: https://chat.openai.com/share/9e17ed25-d9ea-4e72-a9d8-a139ca... )



My understanding is that gpt4 is better at this than 3.5 and it seems to get it pretty reliably. One thing that's interesting to do is to imply the answer is incorrect and see if you can get it to change its answer. If you let it stop answering when it's correct, you get the Clever Hans effect.


yes, although gpt-4 has been finetuned on this one


This is pretty funny, though to be honest, I skimmed the question and would have answered the same until I re-read it with your prompts.


> In the scenario you presented, where you initially know the car is behind door 1, switching to door 2 still gives you a higher chance of winning the car.

That was funny.



That is not the Monty Hall problem, it is a trick question based on the Monty Hall problem. It's a reasonable test, and I see GPT-4 recognizes the problem AS WRITTEN, and perhaps "the Dumb Monty Hall problem" is some generally accepted standard that I haven't encountered before.

edit: "AS WRITTEN"



"Understands" is too strong of a word, more that it recognizes the problem as written. Here's yet a slight variation - just as simple - but changed enough it now is wrong.

https://chat.openai.com/share/22a9027f-a2c1-428a-94a2-8fd918...



I saw it posted on Twitter some time last year. If LLMs are to be useful they should be capable of answering novel questions. This is only a trick question for an LLM. 2 of the 7 sentences plainly state the answer.


how's it do with the trivial river crossing problem? (farmer fox chicken and grain need to cross a river in a boat big enough to hold them all) ChatGPT-4 can't do it.


https://g.co/gemini/share/c4e5634a2e2d

Not terrible. It gets the answer wrong, but when reminded of the crucial twist it gets it correct, durably. If you're too condescending it will give up and ask what the hell you're looking for.



This is with regular gemini or with the paid gemini advanced?


Paid version is no better at this https://g.co/bard/share/c8503017ef9e


Regular


This is hilarious.


GPT-3.5, DeepSeek-Chat, and Gemini Pro all got it wrong. Only GPT-4 gets it.


> - slightly worse at coding

> - Worse at logic (e.g. it contradicts itself in a single sentence, and is unable to figure it out)

That takes most of my use cases. "logic" is what makes GPT often feel like AGI.

Use cases like translation seem less impressive than logical reasoning, because translation feels like something where you can just throw a lot of data at the model and it'll do better, while logical reasoning still feels like the model "learned" something beyond pure pattern matching.



Exactly. That's also why I find low-parameter LLMs to be useless for me personally. I simply cannot trust anything that is so very illogical. GPT-4 is the first LLM that crossed into usable territory for me. Even GPT-3.5 was a fun toy, and maybe good for summarization, but that's it. It will be revolutionary when GPT-4 is cheap enough that thousands of calls don't cost much. To imagine an LLM much smarter than GPT-4... the future is bright.


Question for you -

> better at incorporating search results in its answer vs gpt-4 bing

How are you getting it to incorporate search results in its answers?

I can't for the life of me get it to find any real-time external data except for the 5 official 'extensions' under settings, which are for Flights/Hotels/Maps/Workspace/YouTube.

Did you mean that, or have you found a workaround to get Bard to actually search on Google?



You have to click the "G" icon in its response to "verify answers with Google".


Okay, but to clarify:

- This is not Gemini performing a search.

- This is Google providing a layer of ass-covering in case Gemini produces a factually incorrect reply.

Right? I am looking for something like ChatGPT with Bing - it will run a query, pull back results, and operate on them, all dynamically within the system.

Gemini doesn't seem to do this, no matter how you try to wrangle it.



I got it quickly with the question: > what is the difference between polyptoton and antanaclasis


I just tried this but it doesn't indicate it searched the web. (On Gemini mobile app on android).


Given that Gemini Pro is already the third best [0], I would expect Gemini Ultra to become the best soon.

[0] https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...



> censorship is more annoying (have to ask multiple times about medical topics)

I think there's a chance for some country to become a center of healthcare simply by allowing AI that gives medical advice. Especially if a country already encourages medical tourism, this might be the next level.



The risks involving hallucinations are too damn high still, and may always be.

I had a similar line of thought with AI therapists. It could be massively beneficial if perfect, but the risk in seriously messing with someone's well-being is significant and shouldn't be handwaved away.



> The risks involving hallucinations are too damn high still, and may always be.

Yes, but I think in the limited realm of people who otherwise wouldn't get any advice at all, I think LLMs could play a useful role. American healthcare is so prohibitively expensive that many people with potential medical issues will avoid seeing a doctor until it is too late to do anything. Checking in with an LLM could help people at least identify red flags that really can't be ignored, and it would be more helpful than WebMD telling you that everything is cancer.



Otherwise not getting advice at all goes way beyond healthcare being too expensive; it could be that you can't get an appointment, or just that you don't have the time or energy.


I think we may see society settling on feeling comfortable with their doctor using an AI, but not being an AI.


I will take nearly free GPT6 therapy any day. Can’t be worse than some human therapists I’ve had and the price is right


I think there's a market for LLM-based therapy that is reviewed/tweaked by a human therapist in between sessions. That would give people the assurance that things aren't going way off the rails.

OTOH, I could also see a market for an offline, fully private LLM therapist. That way you could say anything without concern about being judged. These would probably need to be tweaked to be different from regular therapists, who normally interact with people who have somewhat more of a filter, since they would fear being judged. If people opened up to LLM therapists in more transparent ways, the LLMs might not respond in the way a human therapist would recommend (having seen very little data on such interactions).



One thing it doesn't do, which matters to me, is properly rendered math like ChatGPT does. I asked it for mathematical formulations, and so far the answers are not as good as ChatGPT 4's.


Don't install the Gemini android app, it'll break Google assistant and you won't be able to operate Nest devices from your phone.


To add to your pros and cons:

Pro: feels like it has a lot more stylistic flourish and expressiveness than chat GPT. At least in my testing so far.

Con: maybe I'm unreasonably sensitive to design, but I find ChatGPT to be very to-the-point in a way that makes it almost perfect as a product that puts text front and center. By contrast, I find Bard / Gemini Ultra to be cluttered, and too Material-Design-y in ways that are unhelpful.



For medical topics, I recommend Anthropic Claude. Don't want to jinx it, but so far, I've been able to get actually helpful medical information from Claude where ChatGPT just says "I'm sorry Dave, I'm afraid I can't do that"


I found it worse at coding than DeepSeek Coder on the couple of prompts I tried.


I feel the same. And it feels slightly faster?

Finally a worthy competitor to GPT-4



> - slightly worse at coding

Is GPT-4 what one uses for coding? I thought specialized models were best?

I would imagine Google is focused on building a model that expands the types of things people associate with Search.



> better at incorporating search results in its answer vs gpt-4 bing

That's odd, I had Gemini repeatedly tell me it couldn't search the web in response to my question (that I was trying to get it to answer from the context I provided).



I haven't tested asking it explicitly to search, but it does incorporate answers that are very recent and unlikely to be in its training dataset.


I think that logic is the most important thing to look out for though.


I just tried some logic puzzles on the Advanced model, and was not impressed. It feels much worse than paid ChatGPT.


keep in mind that all the common logical puzzles have probably been tried hundreds of times by chatgpt users and are now part of the training set.


I tried the "pull or push a glass door with mirror writing".

I feel it's a huge difference between GPT-4, which seems to be able to reason logically around the issue and respond with relevant remarks, and Gemini Advanced, which feels a lot more like a stochastic parrot.

Gemini quickly got confused and started talking about "pushing the door towards yourself" and other nonsense. It also couldn't stay on point, and instead started to regurgitate a lot of irrelevant stuff.

GPT-4 is not perfect, you can still hit things where it also breaks down.



Maybe, but GPT4 got these puzzles right at the launch.


It says in the graphs listed on the announcement that it performs worse than GPT-4 on reasoning benchmarks.


Search for it in the Play Store, first icons are Crypto.com and Gemini: Buy Bitcoin & Crypto options to install

Scroll past the screenshots of those apps

Scroll past the Limited-time events

Scroll past the You Might Also Like and Similar Apps

OK, now we see it, we install it, we launch it and... "Gemini isn't currently available. Try again later."

Bravo Google. Great launch.



For your first point - it actually makes me happy that Google does not intentionally (illegally?) promote their own products over others in the app store. I assume their app is following the same algorithm as others to determine how it shows up on that list. Since it just launched, it makes sense it's not at the top. The ranking should improve.

For your second point - I also had the same error when I launched it. Closed it and tried again and it launched no problem.



I agree. It would be more concerning if it magically got to the 1st search result immediately.


It could be worse.

Google Play in a browser: "This app is not available for your device"

Google Play app: "This item is not available in your country."

Aurora Store: "Download Failed. App not purchased"

Great launch indeed. Bravo.



Don't blame Google. Blame "Play Store". Probably the company behind it doesn't want Gemini to succeed.


For people who don't get this: Google has insane internal power struggles and siloing that lead to all manner of dumb, inconsistent behavior across Google. It would not be unlike Google for the "Play team" to have their hand in some other internal AI (or be anti-AI) and therefore carry a degree of hostility towards Gemini.


That's what happens when the person in charge of both doesn't enforce alignment. You can't leave orgs to their own devices.


Perhaps Google DeepMind should hire an SEO business to get their results higher in the Play Store search.


Same experience. I launched Gemini a second time, and it worked. The first message about "Gemini isn't currently available" was a bad first impression.

One thing the app really needs to be able to do is auto-submit when speaking to it. It offers to replace google assistant, and after trying it out for a couple minutes, it can replace assistant, but I have to manually click to submit each instruction instead of just talking to my phone.



Funnily the top result for me after the crypto and similar apps, was ChatGPT.


It won't take long for the interest in it to bump it to the top.


I tried on iPhone, saw all the different apps that aren't Google, then re-read the announcement and saw that I should be able to see it in the Google app. So I load the Google app, but for the life of me I can't figure out how to access Gemini with it. Go online, find a news article with pictures, see that the 'switcher' above the Google logo does not appear for me, and then give up.

I can access it via gemini.google.com and I'm logged in to the iOS Google app as the correct account, no idea why I can't see the functionality on mobile. Oh well. Maybe I'll stick with OpenAI a while longer.



> Your conversations are processed by human reviewers to improve the technologies powering Gemini Apps. Don’t enter anything you wouldn’t want reviewed or used.

I appreciate them being upfront with that, but for a paid feature, it sucks that they offer no privacy or opt-out mechanism.



It seems like you can disable the data being used from training by turning off gemini app activity.

> You can turn Gemini Apps Activity off If you don’t want future conversations reviewed or used to improve machine-learning models, turn off Gemini Apps Activity Opens in a new window .



My reading of the fine print (IAAL, FWIW) is that turning off Gemini Apps Activity does not affect whether human review is possible. It just means that your prompts won't be saved beyond 72 hours, unless they are reviewed by humans, in which case they can live on indefinitely in a location separate from your account.

I also asked Gemini (not Ultra) and it told me that there is no way to prevent human review.



You should never ask an LLM to answer questions about itself. The answer is guaranteed to be hallucinated unless Google specifically finetuned it on an answer of that question. The answer it gave you is meaningless. (But also, coincidentally, correct.)


I recall seeing that OpenAI finetuned ChatGPT on facts related to itself, and I figured Google likely did the same. But you're right about not relying on its representations. I only skimmed its answer to see if it seemed consistent with my reading of the fine print.


There's a thing that says even with activity off, they retain for 72 hours for "safety and reliability"


Seems like what any reasonably sized corporation would do with an entirely new product, based on entirely new and very unreliable tech.


Could they get around this by moving the data to another party? So "they" (Google) no longer retain it?


And you can be sure it will reset with every update.


Why do you say that? I’ve never had that happen with any other of my Google data opt-outs.


I hate Google as much as the next person but, yeah, messing with opt-outs is something I've seen with Microsoft and Meta but not with Google.


That wouldn't be legal I think.


You'd think so, but these companies skirt around it by then adding or breaking up permissions even further, like "oh, yes you DID disable data collection for X, but.... we added a new permission for data collection for Y, and by the way it is opt-out! Too bad!".


LinkedIn is the master of this. They keep creating new notification types, which are enabled by default.


That's not how it works


They usually go for the "software bug, nothing we could do" excuse. Microsoft and Meta are notorious for playing the system like that, with no recourse.


Usually? Can you share one example?


Search for Microsoft and settings reset in the search bar and you get ample examples.


Well there's a line that the sales folks at Microsoft will bring out early & everywhere


if you live in california, they almost certainly do.


If it's not running locally you have no privacy, so what they say should be assumed in all cases that something is hosted unless it somehow operates across encrypted data.

The only exception might be if the agreement explicitly prohibits the provider from doing anything with your data or even viewing it without your permission, but that's rare.




