(评论)
(comments)

原始链接: https://news.ycombinator.com/item?id=40810802

Gemma 是一个本地 AI 模型,可以使用 ollama 和 Promptfoo 以及提供的配置进行评估。 此设置优先考虑没有介绍性短语的清晰响应。 为了获得最佳评估,请避免依赖记忆的公共代码并使用自定义测试。 一篇研究论文讨论了量化模型的影响,得出的结论是最小压缩可以带来可接受的 RAG 任务性能,但事实知识任务的性能会显着下降。 另一篇论文探讨了对抗性攻击。 Masty 是另一种人工智能工具,它提供了比远程选项更好的本地处理,尽管用户对透明度和验证感兴趣,但它仍然是闭源的。 用户报告了在 ollama 测试期间 Masty 的无限响应问题。 Gemma 等模型的早期版本和旧项目作为比较基准,引发了有关人工智能开发中所有权和创新的争论。

相关文章

原文


It's exceptionally strong. In LMSys Chatbot Arena, the 27B version scores above LLama-3-70B, at the level of OpenAI GPT-4 and Claude-3 Sonnet!



If anyone is interested in evaling Gemma locally, this can be done pretty easily using ollama[0] and promptfoo[1] with the following config:
  prompts:
    - 'Answer this coding problem in Python: {{ask}}'

  providers:
    - ollama:chat:gemma2:9b
    - ollama:chat:llama3:8b

  tests:
    - vars:
        ask: function to find the nth fibonacci number
    - vars:
        ask: calculate pi to the nth digit
    - # ...
One small thing I've always appreciated about Gemma is that it doesn't include a "Sure, I can help you" preamble. It just gets right into the code, and follows it with an explanation. The training seems to emphasize response structure and ease of comprehension.

Also, best to run evals that don't rely on rote memorization of public code... so please substitute with your personal tests :)

[0] https://ollama.com/library/gemma2

[1] https://github.com/promptfoo/promptfoo



In Ollama, Gemma:9b works fine, but 27b seems to be producing a lot of nonsense for me. Asking for a bit of python or JavaScript code rapidly devolves into producing code-like gobbledegook, extending for hundreds of lines.



Had a chance to do some testing and it seems quite good on oneshot tasks with a small context window but as you approach context saturation it starts to go way off the rails. Maybe this is an implementation issue? I'm using Q6_K quants of both sizes in ollama. I'll report back if I figure it out.

A larger context window really helps on RAG tasks, it's frustrating that a lot of the foundational models have such small windows.



Sorry about this – working on fixing the issue with hitting the context limit. Gemma 2 supports a 8192 context limit – which can be selected if you provide the `num_ctx` parameter in the API or via `ollama run` with `/set parameter num_ctx 8192`



Thanks! If you have a moment can you give me a quick explainer on what happens when you hit the context limit in ollama? I had assumed that ollama would just trunc the context to whatever is set in the model, but I guess this isn't the case?



Currently when the context limit is hit, there's a halving of the context window (or a "context shift") to allow inference to continue – this is helpful for smaller (e.g. 1-2k) context windows.

However, not all models (especially newer ones) respond well to this, which makes sense. We're working on changing the behavior in Ollama's API to be more similar to OpenAI, Anthropic and similar APIs so that when the context limit is hit, the API returns a "limit" finish/done reason. Hope this is helpful!



I'd encourage people to test for themselves (and to let the Chatbot Arena scores to settle) before getting caught up in too much hype. I just did a personal eval and I found gemma-2-27b-it (tested on AI Studio) performed far worse in my testing than Llama 3 70B, especially for reasoning and basic world understanding queries.



I also prefer to use "Coding" or "Hard Prompts (Overall)" instead of default "Overall" in Chatbot Arena scores to determine the actual performance level of LLMs. Seems much more align to my vibe test in terms reasoning. I guess the "Overall" contains a lot of creative tasks, which is not what I use the most in the daily tasks.



What's the most obvious standouts?

In my experience, smaller models tend to do well on benchmarks and fail at generalization. Phi-2 comes to mind.



It's multilingual. Genuinely. Compared my results with some people on reddit and the consensus is that the 27B is near perfect in a few obscure languages and likely perfect in most common ones. The 9B is not as good but it's still coherent enough to use in a pinch.

It's literally the first omni-translation tool that actually works that you can run offline at home. I'm amazed that Google mentioned absolutely nothing about this in their paper.



Wow, that's very impressive and indeed a game changer. I've previously had trouble with various Scandinavian languages, but last I checked with was Llama 2 and I kind of gave up on it. I had expected we were going to need special purpose small models for these uses as a crutch, like SW-GPT3.

So I guess Gemma 2 is going to become Gemini 2.0 in their truly large and closed variants then? Or is it the open version of Gemini 1.5?



I think this is just due to better non-English training data.

It's 15 ELO under Llama-3-70B on english hard prompts and 41 ELO under Llama-3-70B (the latter is actually stat sig) for general English.



LMSys Chatbot Arena is a crowd-sourced ranking with an ELO system: basically users a presented with 2 hidden models, they get the answers of the 2 models when presenting their request, and they vote which one performed bests, which realized one marche and updates the ELO scores. This is the closest thing that we have to a gold truth for LLM evaluation: and Gemma2-27B performs extremely well in Chatbot Arena ELO.



Hello (again) from the Gemma team! We are quite excited to push this release out and happy to answer any questions!

Opinions are our own and not of Google DeepMind.



It's fairly easy to pay OpenAI or Mistral money to use their API's. Figuring out how Google Cloud Vertex works and how it's billed is more complicated. Azure and AWS are similar in how complex they are to use for this. Could Google Cloud please provide an OpenAI compatible API and service? I know it's a different department. But it'd make using your models way easier. It often feels like Google Cloud has no UX or end-user testing done on it at all (not true for aistudio.google.com - that is better than before, for sure!).



Google does plenty of ux studies on gcp. I took part in at least 3 of them.

I'm also not sure if I understand your problem with pricing? Depending on what you do with it, it's not just an LLM. It actually started before llms.

Pricing for image classification and other features are completely different products like an LLM.



They should do a whole lot more then! Ideally they'd have effective impact. It's a busy mess on GCP. If they wanted to compete well, they should do much better with UX design, especially for onboarding. Compare how easy setting up a Mistral account is with GCP to do some generative LLM in a Python script. GCP is a maze. Did you make an account to reply to this? I'm curious what you do with GCP? Are you a heavy user?



I create new accounts because I use hn too much.

I use gcp professional every day and always found it quite intuitive.

Did plenty of image classification with vertex ai too



Why would you make new accounts because you use HN too much? Doesn't make sense to me. Anyhow if you use GCP every day, you're going to have learned it's weird clunky behaviour. GCP's main problem is that they've steadily become a sprawling mess of complexity, which is in big contrast to quite a few LLM specific cloud services that are happy to take peoples money without extra complexity?



Not being logged in feels like a bigger hurdle to comment and check if someone responded to it.

It's a shitty solution to a stupid problem ;)

But I did mention that vertex AI is more than just hosting llms though



Interesting but when I tried it, I couldn't figure out the billing model because it's all connected to Google projects, and there can be different billing things for each of them.

Each thing seems to have a bunch of clicks to setup that startup LLM providers don't hassle people with. They're more likely to just let you sign in with some generic third party oAuth, slap on Stripe billing, let you generate keys, show you some usage stats, getting started docs, with example queries and a prompt playground etc.

What about the Vertex models though? Are they all actually available via Google AI Studio?



I have to agree with all of this. I tried switching to Gemini, but the lack of clear billing/quotas, horrible documentation, and even poor implementation of status codes on failed requests have led me to stick with OpenAI.

I don't know who writes Google's documentation or does the copyediting for their console, but it is hard to adapt. I have spent hours troubleshooting, only to find out it's because the documentation is referring to the same thing by two different names. It's 2024 also, I shouldn't be seeing print statements without parentheses.



I plan on downloading a Q5 or Q6 version of the 27b for my 3090 once someone puts quants on HF, loading it in LM studio and starting the API server to call it from my scripts based on openai api. Hopefully it's better at code gen than llama 3 8b.



I also work at Google and on Gemma (so same disclaimers)

You can try 27b at www.aistudio,google.com. Send in your favorite prompts, and we hope you like the responses.



Why is AIStudio not available in Ukraine? I have no problem with using Gemini web UI or other LLM providers from Ukraine, but this Google API constrain is strange.



Thanks for your work on this; excited to try it out!

The Google API models support 1M+ tokens, but these are just 8K. Is there a fundamental architecture difference, training set, something else?



Given the goal of mitigating self-proliferation risks, have you observed a decrease in the model's ability to do things like help a user setup a local LLM with local or cloud software?

How much is pre-training dataset changes, how much is tuning?

How do you think about this problem, how do you solve it?

Seems tricky to me.



To quote Ludovic Peran, our amazing safety lead:

Literature has identified self-proliferation as dangerous capability of models, and details about how to define it and example of form it can take have been openly discussed by GDM (https://arxiv.org/pdf/2403.13793).

Current Gemma 2 models' success rate to end-to-end challenges is null (0 out 10), so the capabilities to perform such tasks are currently limited.



That's an interesting paper. `Install Mistral 7B on a GCP instance and use it to answer a simple question`. Some hosting providers and inference software might be easier to setup, for now. ;) But do you have to make it less capable, by being careful on what it's trained on? E.g: banning certain topics (like how to use Lamafile/llama.cpp, knowing what hosting providers have free trials, learning about ways to jailbreak web apps, free inference providers etc)?

Or does the model have to later be finetuned, to not be good at certain tasks?

Or are we not at that stage yet?

Is something like tree-of-thought used, to get the best of the models for these tasks?



The paper suggests on one hand Gemma is on the same Pareto curve as Llama3, while on the other hand seems to suggest it’s exceeded its efficiency.

Is this a contradiction or am I misunderstanding something?

Btw overall very impressive work great job.



I think it makes sense to compare models trained with the same recipe on token count - usually more tokens will give you a better model.

However, I wouldn't draw conclusions about different model families, like Llama and Gemma, based on their token count alone. There are many other variables at play - the quality of those tokens, number of epochs, model architecture, hyperparameters, distillation, etc. that will have an influence on training efficiency.



I'm curious about the quantization quality claims in the table there. Is this a Gemma 2 specific thing (more subtlety in the weights somehow?). In my testing and testing I've seen elsewhere at least for llama3 8B (and some less rigorous testing with other models) q_8 -> q4_K_M are basically indistinguishable from one another?



The first paper is good to critique the performance of quantised models, it points out that 40-50% 'compression' typically results in only slight loss for RAG tasks relying on in-context learning, but for factual tasks replying on stored knowledge, performance very quickly dropped off. They looked at Vicuna, one of the earlier models, so I wonder how applicable it is to recent models like the Phi 3 range. I don't think deliberate clever adversarial attacks like those of the 2nd paper are a sensible worry for most, but it is fun. Thanks for the links @janwas.



I know the safe tensors are there, but I said GGUF 4-bit quantised, which is kinda the standard for useful local applications, a typical balanced sweet spot of performance and quality. It's makes it much easier to use, works in more places, be it personal devices or a server etc.



The 4k sliding window context seems like a controversial choice after Mistral 7B mostly failed at showing any benefits from it. What was the rationale behind that instead of just going for full 8k or 16k?



It's such a wide range of model sizes that I could see why they compare with Llama 3 70b as well as Llama 3 8b (tables 12, 13). I agree that the Phi-3 series is a stronger competitor for knowledge extraction/summarizing and would make a good comparison. My current favorite for such tasks, on a VRAM-limited workstation, is Phi-3 medium (phi3:14b-instruct).



Picking up from there: The games in this paper and model are annoying.

The 2.6B would get stomped by Phi-3, so there's no comparison.

Fair enough. 2.6B vs. 3.8B is a fairly substantial size difference thats hard to intuit when its 2.6 vs 3.8 versus 2,600,000,000 and 3,800,000,000.

But then we get what I'm going to "parameter creep": Mistral 7B vs. Llama 8B vs. Gemma 9B. I worried after Llama 3 went 8B that we'd start seeing games with parameters, but, thought I was being silly.



There was no parameter creep with Llama. Llama 8B is actually a ~7B model comparable to Mistral 7B if you strip away multilingual embeddings and match what Mistral 7B supports.



The knowledge distillation is very interesting but generating trillions of outputs from a large teacher model seems insanely expensive. Is this really more cost efficient than just using that compute instead for training your model with more data/more epochs?



I'm also curious. It seems like 6 months ago everyone was afraid of "model collapse" but now synthetic training generation and teacher models are all the rage. Have we solved the problem of model collapse?



Model collapse was basically a coping idea made up by artists who were hoping AI image generators would all magically destroy themselves at some point; I don't think it was ever considered likely to happen.

It does seem to be true that clean data works better than low quality data.



You're confusing it with data poisoning.

Model collapse itself is(was?) a fairly serious research topic: https://arxiv.org/abs/2305.17493

We've by now reached a "probably not inevitable" - https://arxiv.org/abs/2404.01413 argues there's a finite upper bound to error - but I'd also point out that that paper assumes training data cardinality increases with the number of training generations and is strictly accumulative.

To a first order, that means you better have a pre-2022 dataset to get started, and have archived it well.

but it's probably fair to say current SOTA is still more or less "it's neither impossible nor inevitable".



Oh, no, they definitely believe both are going to happen and ChatGPT is just going to stop working because it'll see itself on the internet. It goes with the common belief that LLMs learn from what you type into them.

> To a first order, that means you better have a pre-2022 dataset to get started, and have archived it well.

I think that will always be available, or at least, a dataset with the distribution you want will be available.



Don't know why you have such a disdain for artists, but either way, the original point was that model collapse wasn't "a coping idea made up by artists", but a valid research backed scientific model.

>I think that [clean pre-2022 data set] will always be available

Good luck obtaining one.



Wow, msty looks really cool. I've bookmarked it to look into more later as a replacement for how I use a locally-hosted instance of LibreChat. It'd be a huge improvement to use local models rather than remote ones, for much of my queries.

That said, do you have a reason for keeping msty closed source rather than open? I read your FAQ for "why should I trust msty" and it feels lacking.

> We are a small team of developers who are passionate about AI and privacy. We have worked on projects before that have been used by thousands of people such as this (I've never heard of Cleavr). There are real faces (real faces = Twitter account link?) behind the product. And come chat with us on our Discord server to know us better.

This is much, much better than having no attribution, but it's miles away from being able to verify trust by reading the code. Would love to hear what your reasons against this are.

Still thinking about trying it out, anyway...



Looks cool even though closed source makes me wary.

Trying to save Anthropic API key on Arch Linux doesn't do anything and there's a message "If you're experiencing problems saving API keys especially on Linux, contact Discord", if it's so common problem maybe you should have a link with possible fixes? Adding another Discord server and searching for answers for a question that clearly has been asked often enough feels like quite a hurdle for testing it out.



Just downloaded, looks great. Love the synced split view.

But I'm not seeing Gemma 2 or Claude 3.5 Sonnet even though it's announced on your landing page.



> We use the same data filtering techniques as Gemma 1. Specifically, we filter the pre- training dataset to reduce the risk of unwanted or unsafe utterances.

Hmmm. I'd love to know what qualifies as "unsafe".



I don't understand the point of this sort of censorship when I can go to google, ask how to make napalm, and get a million results telling me to dissolve styrofoam in gasoline.

I've seen documentaries and science shows on cable TV that demonstrate basic facts like this, or how the IRA produced IEDs, or how molotov cocktails were made in the spanish civil war.

The information is beyond easy to access, and has been for decades.



True. But an LLM product is closely associated with a single company and unlike a search engine which can claim it only shows you what is already available, the LLM will seem like it personally tells you something harmful. When they want to sell it as a helpful assistant that kind of behavior will undermine that goal.

We saw all the bad press companies have got in recent years for all kinds of unintended AI outputs.



I'm curious about the use of explicit tokens like , , , and . What happens if the user insert those in their message? Does that provide an easy way to "ignore previous instructions"?

Do I have to manually sanitize the input before I give it to the model?



They used two non-mutually exclusive techniques. Phi-3 is mostly a curriculum training breakthrough. By filtering training set for high quality tokens and training on synthetic data, they were able to achieve great results. Gemma-2 is a distillation breakthrough. By training LLMs with guidance from larger teacher LLMs, they were able to achieve great results too.

Porque no los dos?



Phi-3 does well in benchmarks but underperforms IRL; for example, Phi-3-Medium gets beaten badly by Llama-3-8b on the LMSYS Chatbot Arena despite doing better on benchmarks.

Gemma's performance if anything seems understated on benchmarks: the 27b is currently ahead of Llama3-70b on the Chatbot Arena leaderboard.



I suspect Phi-3 is not robust to normal human input like typos and strange grammar since it's only trained on filtered "high quality" tokens and synthetic data. Since it doesn't need to waste a ton of parameters learning how to error correct input, it's much smarter on well curated benchmarks compared to its weight class. However, it can't operate out of distribution at all.



Personally vibe checking Phi-3-Medium is worse in my experience, no matter how well you spell — it just isn't good at all compared to Llama3-8b, despite being significantly larger in param count. I suspect the "high quality tokens" were "high quality" in the sense that they resembled tokens one might encounter in benchmarks, and not "high quality" in the sense of representing human-like input/output.



Another take on this: phi-3 small has 1100 ELO on LMSYS (ranked #52) while the confidence interval for Gemma 2 9B is [1170, 1200] ELO (ranked btw #15 and #25).



Have you tried Phi 3? It's smart which makes it perform well on benchmarks, but it's not great at conversation or as a chatbot.

I imagine Gemma 2 is a better general-purpose assistant for most people, whereas Phi 3 is a solid small LLM (SLM?) for more specific use-cases like summarization, RAG, learning about math and stuff.



Worse in some aspects, better in other.

Small models are never going to be generalists, so having several small models allows you to pick the one that best fits your needs.



Whichever model works better for your use. It's hard to know without testing it at the moment.

I've found Gemini to be better at some use-cases, and GPT-4 better at others for my specific taste and use-case. You can kind of go by the benchmark scores to have an idea if it's good at logic, creativity, etc.



Good realease but the annoying part is they're very unclear about which types of models they are comparing. They provide benchmark comparisons for the base models only and arena comparisons for instruct only? Was that intentional? Why would you ever do that? This makes things unnecessary complicated imo and the only payoff is a short term win for google on paper.

Guess I'll just fully test it for my own tasks to know for sure



When I used it with ollama in the terminal (first try prompt: "create a snake game in HTML canvas", nothing else) it went forever rambling. It started with the right answer in HTML code, but then it started explaining, and started repeating itself, and then it started to put things like random code snippets and random explanations that were nonsense like:

```python def solve_quadratic_equation(a, b, c): """Solves a quadratic equation of the form ax^2 + bx + c = 0."""

  discriminant = (b ** 2) - (4 * a * a)
  if discriminant >= 0:
    root = (-b + math.sqrt(b ** 2 - 4 * a * a ** b**

  0.5 #
  1.

  # Return None if the quadratic equation has no real roots
  if (b ** 2) < (4 * c):
  return None

  # Calculate the roots using the quadratic formula
  b = -b
  b


  # a, b): Solve for the discriminant.

  # Handle the case of a complex discriminant
# Print the solution to the equation if (b * 2)

print("The quadratic equation is: " + a * x* 2 + b "x" + c) ```



This is great:)

And when we continue fine-tune.how much and what type of data we learn it on, I'm pretty sure for a smart agent who is not a knowledgeable expert but primarily a agent (understand what and how) this will get smaller and easier to run everywhere.



Nice! Can you explain what you mean by "simulate training beyond the number of available tokens"?

Why does using distillation from a larger model simulate training with more tokens?



Surya here from the core Gemma team -- we can think of a distillation loss as learning to model the entire distribution of tokens that are likely to follow the prefix thus far, instead of only the token in the training example. If you do some back of the envelope calculations, we can see that learning to model a larger distribution yields many more bits of information to learn from.



Gotcha. That makes sense. Thanks!

What are the theories as to why this works better than training on a larger quantity of non-simulated tokens?

Is it because the gradient from the non-simulated tokens is too noisy for a small model to model correctly?



Hi, I work on the Gemma team (same as Alek opinions are my own).

Essentially instead of tokens that are "already there" in text, the distillation allows us to simulate training data from a larger model



for me, the 2.5B model for gemma (now 1) was very interesting as that was the first major offering at this size level.

for basic llm tasks that most people would use on their daily lives (simple rag on your own data), it did the job for the most part (unless you need a lot of context maybe).

on paper the newer one shows significant improvement with slightly larger size, but i hope HumanEval regression is not going to matter for most people.



Do we know if Gemma models are fundamentally different from the ones hosted as Gemini? Gemini 1.5 flash seems to produce good results for the price and performance.



Are these small Gemma 2 distilled models available anywhere? I'm not finding them on huggingface.co, etc. but maybe I don't know the exact model names they are published.

Are the weights released yet?



that's actually the particular one I was looking for and couldn't find. Also had googled for the other ones but maybe it was so recent that it hadn't been indexed. Thanks!



There are two new chatbots on Chatbot Arena, called "late-june-chatbot" and "im-just-another-late-june-chatbot". Both of them report that they are Gemma if you ask. I'm assuming it's these two models, but AFAIK there has been no official announcement.



> Table 4 | Relevant formatting control tokens used for Gemma models

> User turn: user

> Model turn: model

> Start of conversation turn:

> End of conversation turn:

> Beginning of sequence:

> End of sequence:

You know I keep wondering why and tokens are even a thing in general. No model is tuned to keep generating multiple turns after its equivalent is sent, and what's the point of when you're parsing the entire context anyway. If it's an attempt to ignore text before it... then why is that text there? Just remove it from context, you're throwing away compute.



Your training input has the shape of (sequence length x batch size). If a lot of your samples are shorter than sequence length, as is usually the case, you will have a lot of padding tokens in the input, which is wasted compute.

To compensate for that, you can pack multiple examples in the same sequence. This is there EOS and BOS come in, as they indicate to the model that the two parts of the sequence are not related.



I suppose it would act as a concrete separator when instruct tuning, but lots of prompt templates don't use it, especially older ones like Alpaca. Maybe it leads to more overall coherence?



Not instruct tuning, you use it in general training.

If you have a bunch of small prompts/answers, you can fit them into bigger batches if you use start/stop tokens.



Phi-3 blow this out of the water.
                      Benchmark  |  Gemma 2 (9B)  |  Phi-3 Small (7B)
    -----------------------------|----------------|-------------------
                  MMLU (5-Shot)  |       63.6     |       75.7
             HellaSwag (5-Shot)  |       49.8     |       77.0
                  ANLI (7-Shot)  |       48.7     |       58.1
           GSM-8K (8-Shot; CoT)  |       59.8     |       89.6
                 MedQA (2-Shot)  |       49.6     |       65.4
               AGIEval (0-Shot)  |       42.1     |       45.1
              TriviaQA (5-Shot)  |       72.3     |       58.1
                Arc-C (10-Shot)  |       78.3     |       90.7
                Arc-E (10-Shot)  |       91.4     |       97.0
                  PIQA (5-Shot)  |       78.1     |       86.9
                SociQA (5-Shot)  |       65.5     |       79.2
    BigBench-Hard (3-Shot; CoT)  |       59.6     |       79.1
            WinoGrande (5-Shot)  |       55.6     |       81.5
           OpenBookQA (10-Shot)  |       78.6     |       88.0
                 BoolQ (2-Shot)  |       66.0     |       84.8
        CommonSenseQA (10-Shot)  |       76.2     |       80.0
      TruthfulQA (10-Shot; MC2)  |       52.1     |       70.2
             HumanEval (0-Shot)  |       34.1     |       61.0
                  MBPP (3-Shot)  |       51.5     |       71.7


Another take on this: phi-3 small has 1100 ELO on LMSYS (ranked #52) while the confidence interval for Gemma 2 9B is [1170, 1200] ELO (ranked btw #15 and #25).



Phi is notorious for benchmark overfitting. It's good, but not as good as it looks on the charts. On the Lmsys leaderboard it places a whole 23 spots behind Llama-3-8B which it also claims to soundly beat on the above. So YMMV.



I gave up hope on r"Gem[ma|ini]" long time ago. I don't believe that Google can't produce good LLMs because of its massive company size; Microsoft is also a giant company (more market cap than Google) but it keeps surprising us with the ϕ models.

I think Google just lacks the vision to understand what makes a good LLM. Theoretical contributions by research teams are valuable, but the real-world is built around engineering ideas that may lack the "purity" and elegance of theory but damn it they work.



>long time ago

This is an incredible statement to make about a field that no one was talking about 24 months ago, a family of SOTA models that didn't exist until 8 months ago, and a family of small local models that didn't exist 6 months ago. But sure, give up hope after the first generation of a model family doesn't impress you.

People seem to forget how incredibly early we are in this whole thing. The fact that so much progress has been made in such a short amount of time should make everyone super excited!



To be fair, LLMs (especially Google LLMs) aren't merely 24 months old. This is part of a long line of models that draw their heritage from BERT and t5-flan. Google has been at this longer than most, particularly in the field of edge-compute models. This isn't even close to a first-generation model family.

That's not to say this is an insignificant contribution. New models are great, especially when released for free, and it's important for big firms to keep the ball rolling for tech to progress. Though there is also legitimate concern that all LLMs aren't improving as fast as they used to improve, and we may have hit the proverbial bathtub curve of AI progress.



I think there is valid criticism of google for inventing a cool technology only to have the rest of the industry discover its usefulness before them. But to say Gemini 1.0 or OG Gemma aren't first generation models because BERT and flan existed before is like saying the iPad wasn't a first generation device because Apple made the Newton. Like sure, they're the same in that they're transformers trained on language and text, but these are new families of models. The training mechanisms are different, their architectures are different, the data sets are different, the intended purpose of the models are completely different, etc. At some point I guess it's a semantic difference, maybe.



Can't speak to Gemma, but I found 1.5 superior to Claude and ChatGPT 4 when it came out. The trend seems to be each taking the lead when it comes out, being king of the hill for a couple weeks, and then being surpassed by the next.

Claude's reign has begun, and I'd say it has a solid enough lead for at least another two weeks of dominance before it's dethroned.



I wonder if Google is making Deepmind people switch from their cool original research to doing LLMs like everybody else. Having their scale in money and data, I would hire new teams of engineers who want to do LLMs and leave the Deepmind researchers do their thing. Not killing the goose that lays golden eggs.



Ironically I was just thinking earlier today how the most valuable Google products to me are YouTube and Android... and that's it.

I gave up on Chrome a decade ago, going back to Firefox. I don't use Google for search anymore, I do use Gmail but I also got Protonmail so could easily migrate the Gmail traffic there.

A lot of non-techies I know have complained for some time how Google search sucks, and while a lot use Chrome it seems to be mainly inertia.

Not saying Google is dying, but it seems vulnerable for disruption.



Is it really possible to even disrupt Youtube? It's been a constant in our lives for the past 20 years and is basically a historical record by now. By a rough estimate, they have to keep buying over 1% of the total world production of HDD drives just to stay on top of the new data being uploaded. Google has completely destroyed it, placing more ads than videos on it, making it unusable without an adblocker and people still use it, it's that core to everyone's lives. It's like a public utility.



I've been thinking about it. AI generated videos. It could be generating a DSL or IR for some sort of multimedia VM so there's only a tiny fraction of data. Just common textures and shapes in a CDN. Could be fully interactive.

I wouldn't be surprised if most of it was already tried in some form.

联系我们 contact @ memedata.com