Comments

Original link: https://news.ycombinator.com/item?id=39877391

The commenter was initially impressed by an article showcasing a new in-browser optical character recognition (OCR) tool. On closer inspection, however, they found that the app is essentially a wrapper around Tesseract.js, an open-source OCR library with a WebAssembly port. The author used an LLM to speed up integrating Tesseract.js into the project, but the actual OCR processing is handled by Tesseract.js. The commenter acknowledges that the article was not deliberately misleading, but wishes the author had credited Tesseract.js more prominently from the start, given its importance to the overall functionality. The thread also covers experiences with various OCR tools and their limitations, particularly around table extraction. In short, while an LLM played a role in building the project, the real star is Tesseract.js.


Original thread


I was really impressed until I realized that the app is basically a wrapper around tesseract.js, which is the actually cool part. Tesseract has a WASM port that can operate inside a web worker.

Not saying that the article was being misleading about this, just saying that the LLM part is basically doing some standard interfacing and HTML/CSS/JS around that core engine, which wasn’t immediately obvious to me when scanning the screenshots.



The LLM part is almost irrelevant to the final result to be honest: I used LLMs to help me build an initial prototype in five minutes that would otherwise have taken me about an hour, but the code really isn't very complex.

The point here is more about highlighting that browsers can do this stuff, and it doesn't take much to wire it all together into a useful interface.
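That wiring really is small. A minimal sketch of it, assuming the tesseract.js v5 browser build loaded from a CDN (the CDN URL, element IDs, and API shape here are assumptions, not taken from the article):

```html
<!-- Hypothetical minimal page: pick an image, OCR it entirely in the browser. -->
<input type="file" id="pic" accept="image/*">
<pre id="out"></pre>
<script src="https://cdn.jsdelivr.net/npm/tesseract.js@5/dist/tesseract.min.js"></script>
<script>
  document.getElementById('pic').addEventListener('change', async (e) => {
    // createWorker fetches the WASM engine and language data, then runs
    // recognition inside a web worker, off the main thread.
    const worker = await Tesseract.createWorker('eng');
    const { data: { text } } = await worker.recognize(e.target.files[0]);
    document.getElementById('out').textContent = text;
    await worker.terminate();
  });
</script>
```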



Simon - hope you don't mind me commenting on you in the third person in relation to the above. Simon is a great explainer, but I wish he would credit the underlying technology or library (like tesseract.js) a bit more upfront, as you did here.

It matters in this case because for Tesseract, the exact model version is incredibly important. For example, v4 is pretty bad (but it's what's available in most Linux distros when run server-side) whereas v5 is decent. So I would have gauged my interest in this post more accurately if it had been a bit more upfront that "Tesseract.js lets you run OCR against PDFs fairly quickly now, largely because of the better processors we as devs have, not because of any real software change in the last 2-3 years".

I felt this way about his NLP content too - but clearly it works, because he's such a great explainer and good at teasing content so that you do read it! I must say I've never been left confused by Simon's work.



You act like you were misled, but the article, within the first few sentences, says he realized the tools are available to do this (including naming tesseract.js explicitly!), he just needed to glue them together. Then he details how he does that, and only then mentions he used an LLM to help him in that process. The author's article title is equally not misleading.

Was an earlier headline or subtitle here on HN what was misleading, but then that was changed to not be misleading?



I recently built a similar tool, except it's configured to use some deep learning libraries for the table extraction. I'm excited to integrate unitable, which has state-of-the-art performance, later this week.

I built this because most of the basic layout detection libraries have terrible performance on anything non trivial. Deep learning is really the long term solution here.

https://github.com/Filimoa/open-parse



> Use LLMs to spell/grammar check and perhaps even auto-complete missing pieces?

I would really want human review. Remember that copier that changed digits because it was being clever with compression?



I've been trying out alternative versions of this that pass images through to e.g. the Claude 3 vision models, but they're harder to share with people because they need an API key!
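For reference, a sketch of the kind of request body such an experiment needs, based on the Anthropic Messages API's image content blocks (the model id and prompt text here are placeholders, not from the comment):

```javascript
// Build a Messages API request body that sends one base64 PNG plus an
// OCR-style prompt. Pure object construction: no API key is needed here.
function buildOcrRequest(base64Png) {
  return {
    model: "claude-3-opus-20240229", // placeholder model id
    max_tokens: 1024,
    messages: [{
      role: "user",
      content: [
        {
          type: "image",
          source: { type: "base64", media_type: "image/png", data: base64Png },
        },
        { type: "text", text: "Transcribe all text in this image." },
      ],
    }],
  };
}
```

The actual call is just a POST of this JSON to the Anthropic endpoint with the user's key in a header, which is exactly the sharing problem the comment describes.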


This is nice but Tesseract does not perform well when it comes to tables, at least when I tried it on multiple documents.

It would miss some cells from a table, or fail to recognise all the numbers when they contain commas.



Tables are still the big unsolved problem for me.

There are a ton of potential tools out there like Tabula and AWS Textract table mode but none of them have felt like the perfect solution.

I've been trying Gemini Pro 1.5 and Claude 3 Opus and they looked like they worked... but in both cases I spotted them getting confused and copying in numbers from the wrong rows.

I think the best I've tried is the camera import mode in iOS Excel! Just wish there was an API for calling that one programmatically.



The example on the Tesseract.js page shows it highlighting the rectangles of where the selected text originated. Does this level of information get surfaced through the library for consumption?

I just grabbed a two-column academic PDF, which performed about as well as you would expect. If I were returned a JSON list of text + coordinates, I could do some dirty munging (e.g. the footer is anything below this y index, column 1 is between these x ranges, column 2 is between these other x ranges) to self-assemble it a bit better.
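That munging is small enough to sketch. Assuming word objects shaped like tesseract.js's output (each with a `bbox` of `{x0, y0, x1, y1}` - an assumption worth checking against the actual API), a rough two-column reassembly might look like:

```javascript
// Rough reassembly of a two-column page from word bounding boxes.
// columnSplitX and footerY are page-specific magic numbers you'd eyeball.
function reassembleColumns(words, columnSplitX, footerY) {
  const keep = words.filter(w => w.bbox.y0 < footerY); // drop the footer
  const col1 = keep.filter(w => w.bbox.x1 <= columnSplitX);
  const col2 = keep.filter(w => w.bbox.x0 > columnSplitX);
  // Within a column, reading order is roughly top-to-bottom, left-to-right.
  const inReadingOrder = ws =>
    ws.slice()
      .sort((a, b) => (a.bbox.y0 - b.bbox.y0) || (a.bbox.x0 - b.bbox.x0))
      .map(w => w.text)
      .join(" ");
  return inReadingOrder(col1) + "\n" + inReadingOrder(col2);
}
```

Real pages need fuzzier line grouping than a strict y-sort, but this is the shape of the "dirty munging" described above.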



Yes it does, but I've not dug into the more sophisticated parts of the API at all yet. I'm using it in the most basic way possible right now:
    const {data: {text}} = await worker.recognize(imageUrl);


Tesseract is way outdated though, to the point of being borderline useless when compared to alternatives. What’s the current deep learning based FOSS SOTA, does anyone know? I want something that does what FineReader does - create a high quality searchable text underlay for scanned PDFs.


This is cool! I've also used tesseract OCR and found it to be pretty amazing in terms of speed and accuracy.

I use it for ingest of image and pdf type files for my own website chatting tool: tinydesk.ai!

I run the backend on an express js server so all js as well.

Smaller docs I do on the client side, but larger ones (>1.5 MB) I've found take forever, so those are processed in the backend.



My s3-ocr tool could do that with quite a bit of extra configuration.

https://github.com/simonw/s3-ocr

You would need to upload them all to S3 first though, which is a bit of a pain just to run OCR (that's Textract's fault).

You could try stitching together a bunch of scripts to run the CLI version of Tesseract locally.
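That stitching can be as small as one shell function. A sketch, assuming `pdftoppm` (from poppler-utils) and the `tesseract` CLI are installed - Tesseract can't read PDFs directly, so each page is rasterised first:

```shell
# Rasterise each PDF page at 300 dpi, OCR every page image,
# then stitch the per-page text files back into one.
ocr_pdf() {
  pdf="$1"
  base="${pdf%.pdf}"
  pdftoppm -r 300 -png "$pdf" "$base"        # writes <base>-1.png, <base>-2.png, ...
  for img in "$base"-*.png; do
    tesseract "$img" "${img%.png}"           # writes <base>-N.txt next to each PNG
  done
  cat "$base"-*.txt > "$base.txt"
}
```

Usage would be `ocr_pdf report.pdf`, producing `report.txt`.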



The amazing thing here is that this tool was almost entirely written using an LLM. This is very exciting. I have been using GPT-4 a lot lately to make tiny utilities - things I wouldn't even have tried because of how much effort it takes to get started on those simple things.

I always wanted to make a Chrome extension for one thing or another, but all the learning involved around the boilerplate always drained my motivation. With GPT I built the initial POC in an hour, then polished it and even published it on the store. Recently I put together some bash and cmd helper scripts; I don't know either of these well enough (I do know some bash) and don't have it in me to learn them, especially the Windows batch scripts. Using an LLM it was a matter of an hour to write a script for my needs as either a Windows batch script or a bash script.

Oh, I even used GPT to write 2-3 AutoHotkey scripts. LLMs are amazing. If you know what you are looking for, you can direct them to your advantage.

Very exciting to see that people are using LLMs similarly to build things they want and how they want.



Why would that suggest I'm being paid by anyone?

Oh I think I see. No, I'm not being paid to promote LLMs.

The point of my blog post was two-fold: first, to introduce the OCR tool I built. And second, to provide yet another documented example of how I use LLMs in my daily development work.

The tool doesn't use LLMs itself, they were just a useful speed-up in building it.

It's part of a series of posts, see also: https://simonwillison.net/tags/aiassistedprogramming/



For now, I'm still using OCRmyPDF: it may be slow, but it's incredibly useful.

The files become big but it just works.

If an alternative is quicker / lighter I will use it, but it must just work.
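For what it's worth, the file-size problem can often be tamed with OCRmyPDF's own flags (a sketch; check `ocrmypdf --help` for the options your version supports):

    # --skip-text leaves pages that already contain text alone;
    # --optimize 3 applies the most aggressive (lossy) image optimisation.
    ocrmypdf --skip-text --optimize 3 scan.pdf scan-searchable.pdf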
