(comments)

Original link: https://news.ycombinator.com/item?id=40358071

The commenter raises concerns about the high per-exchange cost of using cloud large language models (LLMs), especially for longer sequences, and asks whether the cost can be amortized across multiple queries by pre-baking a document into the context. A debate over G vs. OAI pricing follows, with worries about retrieval performance degrading at large context sizes. Despite acknowledging the limitations, participants keep experimenting with various systems. However, an exchange with one particular individual turns personal, producing frustration and disagreement, and the topic drifts toward AI ethics and corporate control over speech. Overall, the thread reflects discussion of the challenges and ethical implications of deploying and using advanced language models in practical applications.

Related articles

Original text


Looking at MMLU and other benchmarks, this essentially means sub-second first-token latency with Llama 3 70B quality (but not GPT-4 / Opus), native multimodality, and 1M context.

Not bad compared to rolling your own, but among frontier models the main competitive differentiator was native multimodality. With the release of GPT-4o I'm not clear on why an organization not bound to GCP would pick Gemini. 128k context (4o) is fine unless you're processing whole books/movies at once. Is anyone doing this at scale in a way that can't be filtered down from 1M to 100k?



With 1M tokens you can dump 2,000 pages of documents into the context window before starting a chat.

Gemini's strength isn't in answering logic puzzles; its strength is its context length. Studying for an exam? Just put the entire textbook in the chat. Need to use a dead language for an old test system with no information on the internet? Drop the 1,300-page reference manual in and ask away.



How much do those input tokens cost?

According to https://ai.google.dev/pricing it's $0.70/million input tokens (for a long context). That will be per-exchange, so every little back and forth will cost around that much (if you're using a substantial portion of the context window).

And while I haven't tested Gemini, most LLMs get increasingly wonky as the context goes up, more likely to fixate, more likely to forget instructions.

That big context window could definitely be great for certain tasks (especially information extraction), but it doesn't feel like a generally useful feature.
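To put rough numbers on that per-exchange cost, a minimal Python sketch, assuming the $0.70/1M long-context input rate above (the document size and turn count are made up):

  # Rough sketch: cost of a chat where the full context is re-sent each turn.
  PRICE_PER_TOKEN = 0.70 / 1_000_000  # Gemini 1.5 Flash long-context input rate

  context_tokens = 800_000  # hypothetical document dump
  turns = 20                # hypothetical back-and-forth exchanges

  # Every exchange re-submits the whole context as input tokens.
  total = turns * context_tokens * PRICE_PER_TOKEN
  print(f"${total:.2f}")    # $11.20 for 20 exchanges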



That per-exchange context cost is what really puts me off using cloud LLMs for anything serious. I know batching and so on is needed in the data center, and keeping the KV cache around is important, but you basically need to fully take over a machine to get an interactive session where context costs scale with sequence length. So it's useful, but more in a local LLaMA type situation if you want a conversation.


It makes building any app that requires generous user prompting impossible for regular developers (at cloud pricing).

$20 of hosting can serve thousands of users per month; a $20 LLM subscription serves just one person. This is fucking impossible.



They are asking if you can take the context being passed per interaction and train it into a session in real time (via an online algorithm).

Essentially, bake the passed-in context into the attention layers so that you only need to pass the relevant chat context.

Your post wasn’t a particularly charitable interpretation.



Is there a way to amortize that cost over several queries, i.e. "pre-bake" a document into a context persisted in some form to allow cheaper follow-up queries about it?


They announced that today, calling it "context caching" - but it looks like it's only going to be available for Gemini Pro 1.5, not for Gemini Flash.

It reduces prompt costs by half for those shared prefix tokens, but you have to pay $4.50/million tokens/hour to keep that cache warm - so probably not a useful optimization for most lower traffic applications.

https://ai.google.dev/gemini-api/docs/caching



> It reduces prompt costs by half for those shared prefix tokens, but you have to pay $4.50/million tokens/hour to keep that cache warm - so probably not a useful optimization for most lower traffic applications

That's on a model with $3.5/1M input token cost, so half price on cached prefix tokens for $4.5/1M/hour breaks even at a little over 2.5 requests/hour using the cached prefix.
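A quick check of that break-even arithmetic, using only the numbers quoted above:

  # Break-even for Gemini 1.5 Pro context caching, per the figures above.
  input_price = 3.50 / 1_000_000    # $/token for uncached input
  cache_saving = input_price / 2    # caching halves the cost of prefix tokens
  storage_price = 4.50 / 1_000_000  # $/token/hour to keep the cache warm

  print(storage_price / cache_saving)  # ~2.57 requests/hour to break even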



Depending on the output window limit, the first query could be something like: "Summarize this down to its essential details" -- then use that to feed future queries.

Tediously, it would be possible to do this chapter by chapter in order to get past the output limit while building something for future inputs.

Of course, the summary might not fulfill the same functionality as the original source document. YMMV



Can anyone speculate on how G arrived at this price, and perhaps how it contrasts with how OAI arrived at its updated pricing? (realizing it can't be held up directly to GPT x at the moment)


Isn't there retrieval degradation with such a large context size? I would still think that a RAG system on 128K is better than no RAG + a 1M context window, no? (assuming text only)


You don't really use it, right? There's no way to debug if you're doing it like this. Also, the accuracy isn't high, and it can't answer complicated questions, making it quite useless for the cost.


There's no way it's Llama 3 70b quality.

I've been trying to work Gemini 1.5 Pro into our workstream for all kinds of stuff and it is so bad. Unbelievable amount of hallucinations, especially when you introduce video or audio.

I'm not sure I can think of a single use case where a high hallucination tiny multimodal model is practical in most businesses. Without reliability it's just a toy.



> With the release of GPT-4o I'm not clear on why an organization not bound to GCP would pick Gemini.

Price, for anything that doesn't need GPT-4 quality, and particularly for multimodal tasks, where GPT-4o is OpenAI's cheapest option. GPT-3.5 Turbo, itself 1/10 the cost of GPT-4o, is $0.50/1M tokens on input and $1.50/1M on output, with a 16K context window. Gemini 1.5 Flash, for prompts up to 128K, is $0.35/1M tokens on input and $0.53/1M tokens on output.

For tasks that require multimodality but not GPT-4 smarts (which I think includes a lot of document-processing tasks, for which GPT-4 with Vision and now GPT-4o are magical but pricey), Gemini Flash looks like close to a 95% price cut.
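A rough check of that price-cut claim; the GPT-4o figures ($5/1M input, $15/1M output at launch) are an assumption, not stated in the thread:

  # Gemini 1.5 Flash vs. GPT-4o, prices in $/1M tokens.
  flash_in, flash_out = 0.35, 0.53   # prompts up to 128K
  gpt4o_in, gpt4o_out = 5.00, 15.00  # assumed GPT-4o launch pricing

  print(f"input:  {flash_in / gpt4o_in:.0%} of GPT-4o")    # 7%
  print(f"output: {flash_out / gpt4o_out:.1%} of GPT-4o")  # 3.5%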



I tried to use the 1M tokens with Gemini a couple of months ago. It either crashed or responded _very_ slowly and then crashed.

I tried a half dozen times and gave up, I hope this one is faster and more stable.



I guess it depends on what you want to do.

E.g. I want to send an entire code base in a context. It might not fit into 128k.

Filtering down is a complex task by itself. It's much easier to call a single API.

Regarding quality of responses, I've seen both disappointing and brilliant responses from Gemini. So maybe worth trying. But it will probably take several iterations until it can be relied upon.



I think that's a bit like asking why someone would need 1 GB of Gmail storage when a 50 MB Yahoo account is clearly enough.

It means you can dump context without thinking about it twice and without needing to hack together solutions to deal with context overflow, etc.

And given that most use cases most likely deal with text and not multimodal the advantage seems pretty clear imo.



Long context is a little different than extra email storage. Having 1 GB of storage instead of 50 MB has essentially no downside for the user experience.

But submitting 1M input tokens instead of 100k input tokens:

- Causes your costs to go up ~10x

- Causes your latency to go up ~10x (or between 1x and 10x)

- Can result in worse answers (especially if the model gets distracted by irrelevant info)

So longer context is great, yes, but it's not a no-brainer like more email storage. It brings costs. And whether those costs are worth it depends on what you're doing.
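To put numbers on the cost point: with the tiered Gemini 1.5 Flash input prices quoted elsewhere in the thread, and assuming the long-context rate applies to the whole prompt, the jump is even steeper than 10x:

  # Per-request input cost at 100K vs. 1M context (Gemini 1.5 Flash tiers).
  cost_100k = 100_000 * 0.35 / 1_000_000  # $0.035 at the <=128K rate
  cost_1m = 1_000_000 * 0.70 / 1_000_000  # $0.70 at the >128K rate
  print(cost_1m / cost_100k)              # 20x, due to the tier change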



1M token context by default is the big feature here IMO, but we need better benchmarks to measure what that really means.

My intuition is that as contexts get longer we start hitting the limits of how much comprehension can be embedded in a single point of vector space, and will need better architectures for selecting the relevant portions of the context.



> 1M token context by default is the big feature here IMO, but we need better benchmarks to measure what that really means.

Multimodality in a model that's between 4-7% the cost per token of OpenAI's cheapest multimodal model is an important feature when you are talking about production use and not just economically unsustainable demos.



The problem is that even 1.5 Pro seems completely useless for long context multimodal stuff.

I have tried it for so many use cases in video / audio and it hallucinates an unbelievable amount. More than any other model I've ever used.

So if 1.5 Pro can't even handle simple tasks without hallucination, I imagine this tiny model is even more useless.



In preview, so it can't be used in production; they already rug-pulled people building on Gemini w/r/t cost and RPM, and they're pointedly not putting any RPM or cost numbers on the page. (Seriously, try finding info on cost, RPM, or release right now; you're linked in circles.)

Agree on OpenAI multimodal, but it's sort of a stilted example itself; it works because OpenAI has a hole in its lineup. E.g., Claude Haiku is multimodal, faster, and significantly cheaper than GPT-3.5.



Given the one comment, I wouldn't draw too many conclusions re: Anthropic.* The SNR on AI is low-ish everywhere, and optimizing for a low-cost multimodal model is a business-directed need that's rather niche currently.

* I was very pleasantly surprised to see it ranking as high as #4, and it's still in the top 50, on the App Store.



What do you mean?

A) This fills a gaping hole for cheap multimodal models, OpenAI doesn't have one

B) Anthropic's Haiku is a good choice.

You) wow A didn't know Anthropic. Goes to show power of brands, much like snack foods

B) Eh, I wouldn't conclude anything from A. It's one comment. Some people don't know what an Anthropic is because there's high interest in AI relative to interest in AI APIs. You can expect a low SNR, even on HN.

You) Stop personally attacking me! It's against the rules!!



I don't think it's helpful to broaden the aspersions: you had a couple hours to cool off, and you also got signal from the community that you went overboard.

It's clear I was talking to you, not about you; you can tell because it's a reply to you.

I do understand it would feel awful if I had replied to you while ignoring you, instead telling some anonymous 3rd person that you don't know Anthropic pricing. Someone looking to bully could possibly use that to say you are an amateur who doesn't know what you're talking about w.r.t. AI and only knows a brand name or two.

If anyone does that, let me know, I'll correct their bullying... pulls on sunglasses... with extreme prejudice.



> signal from the community that you went overboard.

What signal are you referring to?

That you and one other person are making low brow comments that do not enhance the conversation?



> What signal are you referring to?

1. The comment where you began getting upset and attacking people is grayed out, the only one in this thread.

2. The other person straightforwardly explaining to you that I was replying to you, not talking about you. (Which, I do understand why you're taking it personally, but humbly, it isn't a rule violation or close to one, or "low-brow"; it's well within the bounds of adult conversation.)

3. Me agreeing the reply was a reply; I don't know you or judge you, and if anyone does, I gave a nice playful comment telling you I'd fight for your honor.

I humbly suggest taking a step back from this thread: it can be incredibly frustrating when you feel you were wronged and no one is listening, I feel ya. I tried giving you an e-hug without being condescending with the sunglasses comment. We both agree it's outside the bounds of HN to continue this. Yell at me on Twitter for being sneaky and lying and saying you're bad at AI and making low-brow comments, @jpohhhh.



>My intuition is that as contexts get longer we start hitting the limits of how much comprehension can be embedded in a single point of vector space, and will need better architectures for selecting the relevant portions of the context.

We are dealing with multi-headed attention, therefore we have multiple points per token. You can always increase the number of heads or the size of the key vector.



> My intuition is that as contexts get longer we start hitting the limits of how much comprehension can be embedded in a single point of vector space, and will need better architectures for selecting the relevant portions of the context.

Is it possible to explain what this means in a way that somebody only roughly familiar with vectors and vector databases could follow? Or recommend an article or further reading on the topic?



Limitations of a single point in a vector space of what dimension?

I’m not sure it’s public knowledge, but it’s an architecture choice. They choose how big to make the embedding dimension.

My point is just that there’s no limitation in principle, it’s just a matter of how they design it and resource constraints.



Thanks for responding to that point - it's the one most on my mind.

So OpenAI's large embedding model has 3072 dimensions, though in practice far fewer are probably used. Clearly you can't compress 1M tokens down to 3072. Yet those 3072 numbers are all you've got for capturing the full meaning of the previous token when predicting the next one; including all 1M tokens of modifying context.

So perhaps human language is simply never complex enough to need more than 3072 numbers to represent a given train of thought, but that doesn't seem clear to me.

Edit: Since Gemini is relevant here, it looks like their text embedding model is 768 dimensions.



Yes, but we can distinguish between the embeddings provided to customers and internal embeddings. One is optimized for usage in certain types of applications, but the internal embeddings need to be optimized to support long contexts and are not constrained by the customer-facing embeddings.

For example, with OpenAI I believe it's known that the internal dimension for GPT-3 was 12,288.
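That figure is consistent with the published GPT-3 architecture, where the internal width is the number of attention heads times the per-head dimension:

  # GPT-3 175B internal width: 96 attention heads, each of dimension 128.
  print(96 * 128)  # 12288, the internal embedding dimension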



> So perhaps human language is simply never complex enough to need more than 3072 numbers to represent a given train of thought, but that doesn't seem clear to me.

Will compute allow that number to go up? Or is that an optimal number?



Yeah, it's not very good in practice. You can get a halfway decent demo out of it ("look, I gave it 6.5 Harry Potters and it made an SVG map connecting characters with annotations!!"... some of the characters... sparse annotations... cost $20). Just good enough to fool you a couple times when you try to make it work 10 times.


A lightweight model that you can only use in the cloud? That is amusing. These tech megacorps are really intent on owning your usage of AI. But we must not let that be the future.


Now that context length seems abundant for most tasks, I'm wondering why sub-word tokens are still used. I'm really curious how character-based LLMs would compare. With 2M context, the compute bottleneck fades away. I'm not sure, though, what role the vocabulary size plays. Maybe a large size is critical, since the embedding already contains a big chunk of the knowledge. On the other hand, using a character-based vocabulary would solve multiple problems, I think, like glitch tokens and possibly things like arithmetic and rhyming capabilities. Implementing sub-word tokenizers correctly and training them also seems quite complex; on the character level this should be trivial.
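One trade-off worth quantifying is sequence length: character-level input makes sequences several times longer than sub-word tokenization. A minimal sketch using a BPE tokenizer as the baseline (requires the tiktoken package; the counts in the comments are approximate):

  # Compare sequence lengths: sub-word (BPE) tokens vs. raw characters.
  import tiktoken

  text = "Character-level models trade vocabulary size for sequence length."
  enc = tiktoken.get_encoding("cl100k_base")

  print(len(enc.encode(text)))  # ~10 BPE tokens
  print(len(text))              # 65 characters, several times the sequence length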


The attention mechanism is vastly more efficient to train when it can attend to larger, more meaningful tokens. For inference servers, a significant amount of memory goes into the KV cache, and as you note, to build up the embedding through attention would then require correlating far more tokens, each of which is "less meaningful".

I think we may get to this point eventually, in the limit we will want multimodal LLMs that understand images and sounds down to the pixel and frequency, and it seems like for text, too, we will eventually want that as well.
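A back-of-envelope version of the KV-cache memory point, assuming Llama-3-70B-like shapes (80 layers, 8 KV heads under GQA, head dimension 128, fp16); the shapes are illustrative, not from the thread:

  # KV-cache memory per token: K and V for every layer and KV head.
  layers, kv_heads, head_dim, bytes_fp16 = 80, 8, 128, 2

  per_token = 2 * layers * kv_heads * head_dim * bytes_fp16
  print(per_token)                      # 327,680 bytes, ~320 KB per token
  print(per_token * 1_000_000 / 2**30)  # ~305 GiB for a 1M-token context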



Maybe you could just use a good old 1D CNN for the bottom 3-4 layers. Then the model would be able to combine characters into roughly token-length chunks anyway.

Just make sure to have some big MLPs at the start too, to enrich the "tokens" with the information currently stored in the embedding tables.
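A minimal PyTorch sketch of that idea, with byte-level input; the layer sizes and the 4x downsampling rate are illustrative assumptions, not anything proposed in the thread:

  # Byte-level input, an MLP to enrich the raw "tokens", then strided
  # Conv1d layers that merge characters into roughly token-sized chunks.
  import torch
  import torch.nn as nn

  class CharToChunk(nn.Module):
      def __init__(self, d_model=512, vocab=256):
          super().__init__()
          self.embed = nn.Embedding(vocab, d_model)  # byte embedding table
          self.mlp = nn.Sequential(                  # big MLP at the start
              nn.Linear(d_model, 4 * d_model), nn.GELU(),
              nn.Linear(4 * d_model, d_model))
          # Two stride-2 convs: sequence shrinks 4x, near a BPE token rate.
          self.convs = nn.Sequential(
              nn.Conv1d(d_model, d_model, kernel_size=3, stride=2, padding=1),
              nn.GELU(),
              nn.Conv1d(d_model, d_model, kernel_size=3, stride=2, padding=1))

      def forward(self, byte_ids):            # (batch, seq)
          x = self.mlp(self.embed(byte_ids))  # (batch, seq, d_model)
          x = self.convs(x.transpose(1, 2))   # convolve over the sequence dim
          return x.transpose(1, 2)            # (batch, seq / 4, d_model)

  chunks = CharToChunk()(torch.randint(0, 256, (1, 1024)))
  print(chunks.shape)  # torch.Size([1, 256, 512])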



> On a character level this should be trivial.

Characters are not the semantic components of words; those are syllables. Generally speaking, anyway. I've got to imagine this approach would yield higher quality results than the Roman alphabet. I'm curious whether this could be tested by just looking at how LLMs handle English vs. Chinese.



Only in languages that have morphemes! This is hardly a universal attribute of language so much as an attribute of those that use an alphabet to encode sounds. It makes more sense to just bypass the encoding and directly consider the speech.

Besides, treating morphemes as semantic units often results in a completely different meaning than we actually intend. We aren't trying to train a chatbot to speak in prefixes and suffixes; we're trying to train a chatbot to speak in natural language, even if it is encoded to Latin script before output.



That's technically wrong. Every language has morphemes for the simple reason that every word is at least one morpheme. `cat` is a morpheme. `cats` is two morphemes (cat-s).

(The point about semantics is also technically wrong. You would first need to specify your view of semantic compositionality before such a point can be evaluated, but the usual views of semantics don't have any such consequence.)



I would say 2 big problems are:

1. latency, which would get worse if you have to sequentially generate more output

2. These models very roughly turn tokens -> "average meaning" at the embedding layer, followed by attention layers that combine the meanings, and feed-forward layers that match the current meaning combination to some kind of learned archetype/prototype, almost. When you move from word parts to characters, all of that becomes more confusing (what's the average meaning of 'a'?), so I don't think there are good enough techniques to learn character-based models yet.



In AI music generation we get much better results with large vocabulary sizes, on the order of 10^6. My uneducated guess is that's because transformers are not universal pattern recognizers; they can only catch patterns at a certain granularity level.


I will say Google certainly has the better branding team. I like Gemini, Gems, and so on. “ChatGPT” is quite a clunky mess. OpenAI just feels like a faceless entity.

All things that could change but seems late in the game at this point. They certainly had the money to be more creative as they came to market.



OpenAI desperately needs a marketing consult.

"GPT4o"? Seriously?

Even "GPT4 Omni" is easier in conversation, and that's what the "o" stands for!

They severely underestimate the number of casual users they have.



OpenAI doesn’t need marketing because everybody knows who’s the best. Same reason that if I asked you what’s the best violin you would say Stradivari, even though you’ve never seen an ad for one.

OpenAI could call their model the “[poo emoji] 5000” for all the difference it would make.



Google's “Gemini” is a much better product name than any name OpenAI has, but the Gemini product family could use some structure:
  Gemini Advanced (“with Ultra 1.0”)
  Gemini Ultra
  Gemini Pro
  Gemini Flash
  Gemini Nano-1
  Gemini Nano-2


On the NYT Connections benchmark, Gemini 1.5 Flash scores 15.3:

  GPT-4 turbo (gpt-4-0125-preview)       31.0
  GPT-4o                                 30.7
  GPT-4 turbo (gpt-4-turbo-2024-04-09)   29.7
  GPT-4 turbo (gpt-4-1106-preview)       28.8
  Claude 3 Opus                          27.3
  GPT-4 (0613)                           26.1
  Llama 3 Instruct 70B                   24.0
  Gemini Pro 1.5                         19.9
  Mistral Large                          17.7
  -----> Gemini 1.5 Flash                15.3
  Mistral Medium                         15.0
  Gemini Pro 1.0                         14.2
  Llama 3 Instruct 8B                    12.3
  Mixtral-8x22B Instruct                 12.2



If Gemini Flash is just faster Gemini, then I would say that bad answers aren't better when delivered more quickly.

I ran Gemini Pro side by side with ChatGPT-4 for a few months on practical coding, systems architecture, and occasional general questions. ChatGPT was more useful at least 80% of the time. Gemini was either wrong, or so laboriously meandering in reaching a useful answer that it wasn't worth using, in my experience.

Faster isn't what I needed... Maybe it's also "smarter" (more useful) too now?



Presumably we are defining smartness as doing more with less, so this indicates they have something going on in the latent space which will scale.


Gemini 1.5 Flash:

  Input:  $0.35 / 1M tokens (prompts up to 128K), $0.70 / 1M tokens (prompts longer than 128K)
  Output: $0.53 / 1M tokens (prompts up to 128K), $1.05 / 1M tokens (prompts longer than 128K)

Compared to GPT-3.5 Turbo:

  Input:  $0.50 / 1M tokens
  Output: $1.50 / 1M tokens



It's ironic that when you ask these AI chatbots what their own context size is, they don't know. ChatGPT doesn't even know that GPT-4o exists, even when you ask 4o itself.


The models didn't exist when their training data was collected.

But... that's not really an excuse any more. Model vendors should understand now that the most natural thing in the world is for people to ask models directly about their own abilities and architecture.

I think models should have a final layer of fine-tuning or even system prompting to help them answer these kinds of questions in a useful way.



I think "yes" is the most likely answer here

animals have a lot more intelligence than they typically get attributed

Tool use, names, language, social structure and behavior, even drug use has been shown across many species



He meant something more meta, I believe. Knowing you are a monkey is one thing, and knowing that you know you are a monkey is another thing. It's about being cognisant of the fact that there is something called knowledge and that you have it.


The website talks about a specific benchmark:

> Python code generation. Held out dataset HumanEval-like, not leaked on the web

What I find interesting here is that for this particular benchmark _not_ publishing the benchmark is advertised as a feature (instead of as a sign of 'trust me, bro, we have a great benchmark'), and I can understand why. Still these are strange times we live in.



Will wait for Meta to release Flash equivalent weights.

Multi-modal models running offline on mobile devices with millisecond latencies per token seem like the future.

Where is Apple in all of this. Why is Siri still so shit?



I've been diligently trying to use Gemini 1.5 Pro, and it is not even on the level of Llama3-70B. I really hope Gemini improves, even if it gets reduced context length.


Uh guys, yeah.. Adobe are on the phone saying something about trademark infringement, apparently Flash is something else? I don't know, I've never heard of it..


Interestingly, until your comment I hadn't made any connection with old Flash, even though I spent hundreds of hours making Flash games.

This suggests names don't stick around for long and can be re-used. Perhaps Google could bring back "Buzz" and "Wave" since enough time has passed!



Last I checked, you could disable the safety triggers as an API user with Gemini (which doesn't alleviate your obligation to follow the ToS as to the uses of the model).


I'm not working with a company that can just write in the ToS "we can do anything we want. lol. lmao" and expect me to follow it religiously. Corporations need less control over speech, not more.


I mean, you are using a service they're providing - many would say they're exercising their rights by gatekeeping how it's used. There are pretty good models out there you could use however you want for your own purpose, whatever it is. I occasionally fine-tune Mixtral on HN posts+comments and chat with comments. An emergent Dang actually once told me off for flame-baiting a free speech comment.


We're witnessing a race to the bottom on pricing as it's happening. Competition based solely or mainly on pricing is a defining characteristic of a commodity market, i.e., a market in which competing products are interchangeable, and buyers are happy to switch to the cheapest option for a given level of quality.

There's an old saying that if you're selling a commodity, "you can only be as smart as your dumbest competitor."

If we want to be more polite, we could say instead: "you can only price your service as high as your lowest-cost competitor."

It seems that a lot of capital that has been "invested" to train AI models is, ahem, unlikely ever to be recovered.



But the race to the bottom has a countervailing force, right?

People expect to see a return on investment, which will set a floor on pricing (at least once the old money runs out).

I'm also not sure AI is a good example, because AI will become fundamental. That means if you don't invest you might be gone, so it's more like a fee you pay in case the investment doesn't pan out.



Google is building on top of, and integrating with, their cloud offerings. Having first-party solutions like this gives big cloud customers an easy way to integrate. For Google it's just another tool in the chest that gets sold to these big enterprises; many go all in on all the same cloud products. Also, the models are only the building blocks: other cloud products at Google will be built with this and sold as a service.

Not so sure about OpenAI, though…



Is infrastructure and scale not an expression of technical ability? It should have been obvious that Meta and Google would bury a tiny company with fewer than 1,000 employees, given the amount of capital they can leverage for compute, talent, and data. Google literally invented the transformer architecture behind GPT.


Then why imply that it is a commodity because they (partly) compete on price?

Fungibility is the defining characteristic of commodities. While these products can be used to accomplish the same task, we're not near real fungibility yet.



Products that are fungible compete on price (what else?). Chat-with-AI services that have similar performance are pretty fungible today. Switching from one to the other is... remarkably easy. The moment Gemini Flash's competitors start losing customers they will lower their prices to remain competitive.


It's absolutely unconscionable that Gemini Ultra got memory-holed. I can't trust anything that Google says about benchmarks.

It seemingly existed only so in December 2023, Gemini ~= GPT-4. (April 2023 version) (on paper) ("32-shot CoT" vs. 5-shot GPT-4)



Anyone who uses both products regularly will tell you that Gemini Advanced is far behind GPT-4 and Claude 3 Opus.

Pretending that they have a model internally that's on par but they're not releasing it is a very "my girlfriend goes to another school" move and makes no sense if they're a business that's actually trying to compete.



And?

You're replying to a comment that points out Gemini Ultra was never released, wasn't mentioned today, and it's the only model Google's benchmarking at GPT-4 level. They didn't say anything about feelings or context window.



It says it may fall back to a worse model under load, and there is no way to tell which you are getting. I think ChatGPT has at times done something similar, though.


> You're replying to a comment that points out Gemini Ultra was never released

What are you even talking about? How do you know it's memory-holed if you haven't used it? The API is not GA, but the model can be used through the chatbot subscription. GP is talking about their lack of trust in Google's claim of 1M context tokens, not GPT-4 level reasoning. If you expect GPT-4 level performance from cost-efficient models, that's another problem.
