(comments)

Original link: https://news.ycombinator.com/item?id=41088273

In short, word2vec and GloVe are effective methods for generating word embeddings, and they can process large amounts of data quickly compared to larger language models. However, these methods can struggle with complex linguistic concepts such as negation, and with handling homonyms effectively. While word embeddings are useful for finding similar words, semantic search over longer text sequences using language models such as GPT-2 remains promising. Still, for tasks that require fast, efficient processing, traditional word embeddings remain the better choice thanks to their speed and simplicity. The Fast Sentence Embedding library mentioned processes 500k sentences per second, whereas the fastest sentence_transformers manage roughly 20k sentences per second on a powerful GPU. Regarding your question: yes, the system splits each line of text into individual words and uses the corresponding word embedding for each word. As for extending the approach to semantic search across longer text, this could be done with a language model like GPT-2, though further work is needed to determine the best approach. To compute dot products between token sequences, we would first need to aggregate the respective key vectors from each head at each layer. For now, simple word embeddings seem better suited to tasks that need high-speed processing. In addition, a text-analysis tool could consider using substrings of lines that lack clearly delimited words to catch malformed input, although this could add complexity and slow down overall processing. Finally, on configuration, it is unclear how users are expected to set things up; the suggestions are to use environment variables directly, or to store the configuration in "$XDG_CONFIG_HOME" or "~/.config/sgrep/config.toml", depending on the requirements. Note also that using TOML rather than JSON for the config file would improve usability. Lastly, suggestions for improvements beyond the scope of the original post would be better made elsewhere. Overall, the project shows great potential, and I look forward to trying it myself.


Original


The SIMD situation in Go is still rather abysmal; it's likely easier to just use FFI (though FFI is very slow, so I guess you're stuck with ugly Go asm if you're using short vectors). As usual, there's a particular high-level language that does it very well and has a standard vector similarity function nowadays...



Julia is less "general-purpose" and I know little about the quality of its codegen (it does target LLVM but I haven't seen numbers that place it exactly next to C or C++ which is the case with C#).

Mojo team's blog posts do indicate they care about optimal compiler output, and it seems to have ambitions for a wider domain of application which is why it is mentioned.



That's totally clever and sounds really useful. And it's one of those ideas where you go "Why didn't I think of that" when stumbling over the material, word2vec in this case.



The model of mapping a word to a vector breaks down really quickly once you introduce the context and complexity of human language. That's why we went to contextual embeddings, but even they have issues.

Curious whether it would handle negation of trained keywords, e.g. "not urgent"?



For a lot of cases, word2vec/glove still work plenty well. It also runs much faster and lighter when doing development -- the FSE library [1] does 0.5M sentences/sec on CPU, whereas the fastest sentence_transformers [2] do something like 20k sentences/sec on a V100 (!!).

For the drawbacks:

Word embeddings are only good at similarity search style queries - stuff like paraphrasing.

Negation they'll necessarily struggle with. Since word embeddings are generally summed or averaged into a sentence embedding, a negation won't shift the sentence vector space around the way it would in a LM embedding.

Also things like homonyms are issues, but this is massively overblown as a reason to use LM embeddings (at least for latin/germanic languages).

Most people use LM embeddings because they've been told it's the best thing by other people rather than benchmarking accuracy and performance for their usecase.
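To make the negation point above concrete, here is a rough sketch (not from the thread) of why a bag-of-words average barely moves under negation. It assumes gensim and the GoogleNews word2vec binary discussed elsewhere in this thread; the file name and words are illustrative, and it guards against either word missing from the vocabulary.

    # Illustration: averaging word vectors keeps "not urgent" close to "urgent",
    # so a similarity search cannot tell them apart. Assumes gensim + GoogleNews vectors.
    import numpy as np
    from gensim.models import KeyedVectors

    kv = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    urgent = kv["urgent"]
    tokens = [w for w in "not urgent".split() if w in kv]  # guard against OOV words
    not_urgent = np.mean([kv[w] for w in tokens], axis=0)  # bag-of-words average

    print(cosine(urgent, not_urgent))  # stays high despite the negation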

1. https://github.com/oborchers/Fast_Sentence_Embeddings

2. https://www.sbert.net/docs/sentence_transformer/pretrained_m...



This implementation does not because the query has to be a word. One way to extend it to phrases is to average the vectors of each word in the phrase. Another way is to have a word2vec model that embeds phrases. (The large Google News model I think does have phrases, with spaces converted to underscore). But going from words to phrases opens up a whole new can of worms - how large a phrase to consider from the input stream etc. Plus, I don't think averaging words in the phrase is the same as learning the embedding for the phrase. Sentence embedding models are necessary for that, but they are far too slow for this use case as pointed out by others.

To summarize, this is a simple implementation that works for the simplest use case - semantic matching of words.
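For what it's worth, a hedged sketch of the averaging idea mentioned above (this is not what the implementation does, per the comment): build a phrase vector by averaging the word vectors of the query, then score the words of a line against it. Assumes gensim and the GoogleNews vectors; names and example text are illustrative.

    # Sketch: extend a word-level matcher to phrase queries by averaging
    # the phrase's word vectors. Assumes gensim + GoogleNews vectors.
    import numpy as np
    from gensim.models import KeyedVectors

    kv = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True)

    def phrase_vector(phrase: str) -> np.ndarray:
        vecs = [kv[w] for w in phrase.lower().split() if w in kv]
        return np.mean(vecs, axis=0)

    query = phrase_vector("kick the bucket")

    # Score each word of a line against the averaged phrase vector.
    line = "he might die before the deadline"
    for word in line.split():
        if word in kv:
            score = kv.cosine_similarities(query, [kv[word]])[0]
            print(word, round(float(score), 3))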



Yep, it would almost be easier to:

- check to see if the search query contains more than one word

- spin up a more modestly sized LLM, such as mistral 7b

- ask the LLM to try to condense / find a single-word synonym for the user query

- send to sgrep
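A rough sketch of that pipeline, assuming a local Ollama-style server with mistral pulled; the prompt, endpoint defaults, and the sgrep invocation at the end are illustrative, not part of sgrep itself.

    # Hypothetical glue: condense a multi-word query to one word with a local LLM,
    # then hand the result to sgrep. Assumes an Ollama server on localhost:11434.
    import json
    import subprocess
    import urllib.request

    def condense(query: str) -> str:
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({
                "model": "mistral",
                "prompt": f"Reply with one single-word synonym for: {query}",
                "stream": False,
            }).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            reply = json.loads(resp.read())["response"].strip()
        return reply.split()[0] if reply else query

    word = condense("not moving forward quickly")
    subprocess.run(["sgrep", word, "notes.txt"])  # invocation is illustrative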



I wonder if it would be possible to easily add support for multiple CPUs? It seems to be taking at most 150% CPU, so on my workstation it could be (assuming high parallelism) 10 times as fast.

Alas the word2vec repository has reached its quota:

    fetch: Fetching reference refs/heads/master
    batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
    error: failed to fetch some objects from 'https://github.com/mmihaltz/word2vec-GoogleNews-vectors.git/info/lfs'

So here is another source I found for it: https://stackoverflow.com/a/43423646

I also found https://huggingface.co/fse/word2vec-google-news-300/tree/mai... but I'm unsure if that's the correct format for this tool. The first source from Google Drive seems to work and there's little chance of it being malicious.



The model in Google drive is the official model from Google and will work.

Haven't tried the huggingface model, but it looks very different. Unlikely to work.



This would be really useful if it could take a descriptive phrase or a compound phrase (like SQL 'select X and Y and Z') and match against the semantic cluster(s) that the query forms. IMO that's the greatest failing of today's search engines -- they're all one-hit wonders.



Very cool!

Do I understand correctly that this works by splitting each line into words, and using the embedding for each word?

I wonder whether it might be feasible to search by semantics of longer sequences of text, using some language model (like, one of the smaller ones, like GPT2-small or something?). Like, so that if you were searching for “die”, then “kick the bucket” and “buy the farm”, could also match somehow? Though, I’m not sure what vector you would use to do the dot product with, when there is a sequence of tokens, each with associated key vectors for each head at each layer, rather than a single vector associated with a word.. Maybe one of the encoder-decoder models rather than the decoder only models?
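(One common workaround, for what it's worth: mean-pool the model's hidden states over the token sequence and treat that as the sequence vector. A minimal sketch under that assumption, using the transformers library with GPT-2; the pooling choice is a heuristic, not a recommendation, and not what sgrep does.)

    # Sketch: embed a span of text by mean-pooling GPT-2's last hidden states,
    # then compare spans with cosine similarity.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")

    def embed(text: str) -> torch.Tensor:
        ids = tok(text, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**ids).last_hidden_state  # (1, seq_len, 768)
        return hidden.mean(dim=1).squeeze(0)         # average over tokens

    a, b = embed("die"), embed("kick the bucket")
    print(torch.cosine_similarity(a, b, dim=0).item())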

Though, for things like grep, one probably wants things to be very fast and as lightweight as feasible, which I imagine is much more the case with word vectors (as you have here) than it would be using a whole transformer model to produce the vectors.

Maybe if one wanted to catch words that aren’t separated correctly, one could detect if the line isn’t comprised of well-separated words, and if so, find all words that appear as a substring of that line? Though maybe that would be too slow?
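(A tiny sketch of that substring fallback, with a stand-in vocabulary; whether this is fast enough in practice is exactly the open question.)

    # Sketch: if a line has no recognizable whitespace-separated words,
    # fall back to finding vocabulary words embedded in it as substrings.
    vocab = {"kick", "bucket", "farm", "urgent", "die"}  # stand-in for the word2vec vocab

    def candidate_words(line: str) -> set[str]:
        tokens = line.lower().split()
        if any(t in vocab for t in tokens):
            return {t for t in tokens if t in vocab}
        # No well-separated words: scan for vocabulary words as substrings.
        return {w for w in vocab if w in line.lower()}

    print(candidate_words("kickthebucketASAP"))  # {'kick', 'bucket'}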



I wanna meet the person who greps die, kick the bucket and buy the farm lol

Are models like mistral there yet in terms of tokens-per-second generation to run a grep over millions of files?



Mistral has published large language models, not embedding models? sgrep uses Google's Word2Vec to generate embeddings of the corpus and perform similarity searches on it, given a user query.



No, I got that. I asked because wouldn't embeddings generated by fine-tuned transformer-based LLMs be more context-aware? Idk much about the internals, so apologies if this was a dumb thing to say.



Document search, an LLM use case almost all companies want for their white-collar workers, currently goes through building a RAG pipeline.

Vector search is the first step of that process, and I'd argue that getting the top n results and letting the user process them is probably the best of both worlds without involving "AI".

This can be enhanced with multilingual support depending on the encoder used. But building the index is still an expensive process, and I wonder how that could be done fast for a local user.



This tool seems really cool, want to play with it for sure.

Just in general, semantic search across text seems like a much better ux for many applications. Wish it was more prevalent.



Really cool. Often I just want to fuzzy search for a word, and this would be useful. Can it do filenames as well? Or do I need to pipe something like ls first?



I might have a work use case for which this would be perfect.

Having no experience with word2vec, some reference performance numbers would be great. If I have one million PDF pages, how long is that going to take to encode? How long will it take to search? Is it CPU only or will I get a huge performance benefit if I have a GPU?



As someone working extensively with word2vec: I would recommend to set up Elasticsearch. It has support for vector embeddings, so you can process your PDF documents once, write the word2vec embeddings and PDF metadata into an index, and search that in milliseconds later on. Doing live vectorisation is neat for exploring data, but using Elasticsearch will be much more convenient in actual products!
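A rough sketch of that setup, assuming Elasticsearch 8.x and the official Python client; the index name, dimensions, field names, and the stubbed embedding function are illustrative.

    # Sketch: index per-page embeddings in Elasticsearch once, then run kNN queries.
    import numpy as np
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    es.indices.create(
        index="pdf-pages",
        mappings={"properties": {
            "text":      {"type": "text"},
            "embedding": {"type": "dense_vector", "dims": 300,
                          "index": True, "similarity": "cosine"},
        }},
    )

    def page_vector(text: str) -> list[float]:
        # Stand-in for averaging word2vec vectors over the page's words.
        return np.random.rand(300).tolist()

    page_text = "quarterly report, urgent items first"
    es.index(index="pdf-pages",
             document={"text": page_text, "embedding": page_vector(page_text)},
             refresh=True)

    hits = es.search(
        index="pdf-pages",
        knn={"field": "embedding", "query_vector": page_vector("urgent"),
             "k": 5, "num_candidates": 50},
    )
    print(hits["hits"]["hits"][0]["_source"]["text"])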



I would personally vote for Postgres and one of the many vector indexing extensions over Elasticsearch. I think Elasticsearch can be more challenging to maintain. Certainly a matter of opinion though. Elasticsearch is a very reasonable choice.



Elasticsearch is a pain to maintain, the docs are all over the place, and the API is what you end up with if developers run free and implement everything that jumps to their mind.

But there just isn’t anything comparable when it comes to building your own search engine. Postgres with a vector extension is good if all you want to do is a vector search and some SQL (not dismissing it here, I love PG); but if you want more complex search cases, Elasticsearch is the way to go.



Just played around with it; not very smart per se. To get the full power of semantic grep, an LLM might still be needed. How would you do that? Is RAG the only way?



I would think translation to other languages would be trivial. The model is just a map of word to vector. Every word is converted to its vector representation; then the query word is compared to the input words using cosine similarity.
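For a concrete feel of that comparison, a minimal gensim sketch (assuming the GoogleNews vectors linked above; the words are arbitrary examples):

    # One vector per word, compared to the query word with cosine similarity.
    from gensim.models import KeyedVectors

    kv = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True)

    print(kv.similarity("urgent", "critical"))   # high score
    print(kv.similarity("urgent", "banana"))     # low score
    print(kv.most_similar("urgent", topn=3))     # nearest words in the vocabulary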



I assumed you meant computer languages :-) If you mean human languages, yes Google publishes word2vec embeddings in many different human languages. Not sure though how easy it is to download.



This is a good idea. I'm going to offer some unsolicited feedback here:

The configuration thing is unclear to me. I think that "current directory" means "same directory as the binary", but it could mean pwd.

Neither of those is good: configuration doesn't belong where the binaries go, and it's obviously wrong to look for configs in the working directory.

I suggest checking $XDG_CONFIG_HOME, and defaulting to `~/.config/sgrep/config.toml`.

That extension is not a typo, btw. JSON is unpleasant to edit for configuration purposes, TOML is not.

Or you could use an env variable directly; if the only thing that needs configuring is the model's location, that would be fine as well.
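A sketch of that lookup order (Python only for illustration; the env variable name is hypothetical, the paths are the ones suggested above):

    # Suggested resolution: explicit env var, then $XDG_CONFIG_HOME,
    # then ~/.config/sgrep/config.toml.
    import os
    import tomllib  # Python 3.11+ standard library
    from pathlib import Path

    def load_config() -> dict:
        # 1. An env var alone is enough if only the model path needs configuring.
        model = os.environ.get("SGREP_MODEL_PATH")  # hypothetical variable name
        if model:
            return {"model_path": model}
        # 2. Otherwise fall back to $XDG_CONFIG_HOME, defaulting to ~/.config.
        base = Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config"))
        with (base / "sgrep" / "config.toml").open("rb") as f:
            return tomllib.load(f)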

If that were the on ramp, I'd be giving feedback on the program instead. I do think it's a clever idea and I'd like to try it out.



I take a Show HN to be a solicitation for feedback.

It took me a few tries to get this to run. I tried passing -model_path, but it complained that it couldn't find config.json. I then made a config.json in the pwd (which contained the executable), but it couldn't expand "~". I then tried searching *.txt, but it couldn't find anything unless I specified just one file.

In terms of offering an experience similar to grep, it's a lot slower. It's not quite as slow with the SLIM file, but I wonder if there's some low-hanging fruit for optimization. Perhaps caching similarity for input tokens?
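A sketch of that caching idea, assuming similarities are currently recomputed per token; `kv`, the query, and the scoring shape are illustrative, not how sgrep actually works.

    # Memoize per-token similarity against the query so repeated words across
    # many lines are only scored once.
    from functools import lru_cache
    from gensim.models import KeyedVectors

    kv = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True)
    QUERY = "urgent"

    @lru_cache(maxsize=100_000)
    def score(token: str) -> float:
        return float(kv.similarity(QUERY, token)) if token in kv else 0.0

    def best_line_score(line: str) -> float:
        return max((score(t) for t in line.lower().split()), default=0.0)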



I would assume "can be run locally", "w2v can be trained faster than other models", and "doesn't cost per-call to use" are some arguments for why someone would try this.
