(comments)

Original link: https://news.ycombinator.com/item?id=40610641

This page expresses admiration for a website that converts digital images into ASCII art silhouettes. One commenter shares their experience creating similar artwork using Unicode block elements and ANSI colors. Others suggest improvements, including a stable mode without the characteristic wobble effect and better handling of non-monospaced fonts. Commenters also share childhood memories of early computer graphics and discuss potential applications of the technique, such as using it in terminal interfaces or creating creepy visual effects. Overall, the thread reflects a fascination with the technical creativity of turning images into text representations.



This is indeed a fascinating page! The use of ASCII art to create silhouettes is quite creative. Do you know if there are any other similar projects or tools that allow for such artistic expressions using text?



ChatGPT? Seriously though, this is such a weird reply and doesn't fit at all with the account's previous comments. It also has that kind of not-quite-right feel that a lot of AI-generated content has.



Very cool stuff! I wrote something vaguely similar recently that displays images in the terminal using Unicode block elements and 24-bit ANSI colors, but I just assume two pixels per character. I support scaling and animated GIFs: https://github.com/panzi/ansi-img#readme

But those character based logos somehow look more impressive. My thing just looks low-res. XD
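The two-pixels-per-character trick the commenter describes can be sketched in a few lines (this is an illustrative reconstruction, not the ansi-img code): each character cell shows the Unicode upper-half block `▀`, with the top pixel as the 24-bit foreground color and the bottom pixel as the background color.

```python
def render_half_blocks(pixels):
    """pixels: list of rows, each a list of (r, g, b) tuples.
    Each output character covers two image rows: the upper pixel sets
    the foreground color, the lower pixel the background color."""
    lines = []
    for y in range(0, len(pixels) - 1, 2):
        row = []
        for top, bottom in zip(pixels[y], pixels[y + 1]):
            # SGR 38;2 = 24-bit foreground, 48;2 = 24-bit background
            row.append("\x1b[38;2;%d;%d;%dm\x1b[48;2;%d;%d;%dm\u2580"
                       % (top + bottom))
        lines.append("".join(row) + "\x1b[0m")  # reset at end of line
    return "\n".join(lines)

# 4x4 synthetic test image: red top half, blue bottom half
red, blue = (255, 0, 0), (0, 0, 255)
img = [[red] * 4, [red] * 4, [blue] * 4, [blue] * 4]
print(render_half_blocks(img))
```

A real tool would decode the image with an image library and scale it to the terminal size first; the half-block trick itself is only this escape-sequence bookkeeping.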



That's really fun. I love it! One recommendation: it might be nice to add a mode where it doesn't wobble, but retains the cuteness that results from thickening and rounding corners.



The web version never finishes (Chromium-based browser), but the CLI version works.

https://meatfighter.com/ascii-silhouettify/spa/index.html#/

Obligatory pandering:

    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@T@@@@L@@@@@@@@@@@Wg@@@@/@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@\@@@@L@@@@@@@@@DJ@@@@'@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@\@@@@\@@@@@@@Dj@@@@,@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@'@@@@\@@@@@@{@@@@,@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@'@@@@\@@@WJ@@@@/@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@,@@@@,@DJ@@@@/@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@\@@@@\/@@@W/@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@L@@@@@@@Pj@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@LQ@@@@PA@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@ @@@@[@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@ @@@@[@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@ @@@@[@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@ @@@@[@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@ @@@@[@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@ @@@@[@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@g
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@N


Nice.

I wonder if you could do something creepy by starting with a standard log tail output and then slowly introducing more and more visible patterns before, I dunno, a scary face appearing out of the text.



'What has been, it is what will be, And what has been done, it is what will be done. So there is nothing new under the sun.'

Nicely done. We used to print birthday banners and pictures using EBCDIC & ASCII on continuous-paper band- then dot-matrix printers, in a very similar fashion, recognizing the 'density' of characters, once printed.



Yeah, what happened to it? Or didn't happen. I npm/yarn link my own CLI repos everywhere because setting up anything else takes hours and is a mess. I even develop Python scripts with `nodemon main.py` (lol), because only the Node guys seem to know what people need.



Those ASCII art headers don't look correct on my phone. I'm using Firefox on Android, so that might only affect a limited group of people. But I think it should just work with a `<pre>` tag and a monospace font, right?



This is awesome. I love ASCII (and ANSI) art, but recently have been working on creating forms for Space Station 14. Sadly, SS14 does not use a monospaced font for papers.

I have been using ASCGEN2, which lets me specify a font, but this seems much nicer. Does anyone know if there's something similar that lets you specify a font and try to find the best fit?



This. I walked into the computer room with my mom when I was a little kid. There was a dude printing out a pinup and taking pictures of it as the lines fed. My first exposure to porn.



It’s nice how / forms a neat edge on parts of the Disney logo. I imagine that effect must be sensitive to grid alignment. It would be helpful to have a live preview for choosing the most aesthetically pleasing alignment.



This is really neat. Have always wondered how these work under the hood / the algorithm behind them.

Would love a web version that's easily usable though; as much as I live in the terminal, just too much of a mental burden installing a new package.



A word to the site operator: the examples page is not rendering in a monospaced font for me (iOS with lockdown enabled), perhaps try including a safe css fallback monospaced font?



Definitely worse because not every character maps to a pixel. The use of different characters is designed to represent spaces that have groupings of different coloured pixels in different configurations.

You could try to output everything using the ASCII block character and that would give you a close approximation.



On a stock rPi5, running this takes > 3 seconds of CPU time: three seconds to render a 370 x 370, 8-bit/color RGBA image to ASCII on a 2.4GHz CPU. And this is my lead-in to a rant about neofetch, which takes about 0.2 seconds to run on the same Pi (see below); that is also how much it would slow down opening a shell if I put neofetch in my .profile. Lastly, it takes cat about ~0.01 seconds to cat the output of neofetch to /dev/null, which is also roughly how long neofetch should take to run (and really, this tool too).
  $ time ascii-silhouettify -i neofetch-1.png > /dev/null
  real 0m1.817s
  user 0m3.541s
  sys 0m0.273s
  $ time neofetch > out.txt
  real 0m0.192s
  user 0m0.118s
  sys 0m0.079s
  $ time cat out.txt > time
  real 0m0.001s
  user 0m0.001s
  sys 0m0.000s


Surely the use case for this tool is to precompile your image into ASCII and then just output that on every shell start up, right? There’s no reason to convert the image every time.
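The precompile-and-cat approach can be sketched as a shell startup fragment (file names and cache path are illustrative):

```shell
# One-time: convert the image to ASCII once and cache the result.
ascii-silhouettify -i neofetch-1.png > ~/.cache/shell-banner.txt

# In ~/.profile (or ~/.bashrc): just cat the precomputed file,
# which costs milliseconds instead of seconds per shell.
[ -f ~/.cache/shell-banner.txt ] && cat ~/.cache/shell-banner.txt
```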



I would assume that performance wasn't the prime concern, but rather the accuracy/appearance of the generated image. Most people aren't putting this in their shell startup, just as most people aren't putting an ffmpeg encode command in their shell startup.

And I would assume neofetch is relatively slow because getting some of the system information is relatively slow. e.g. to get the GPU name it does "lspci -mm":

  % time lspci -mm >/dev/null
  lspci -mm > /dev/null  0.03s user 0.03s system 2% cpu 2.993 total
  % time lspci -mm >/dev/null
  lspci -mm > /dev/null  0.03s user 0.01s system 76% cpu 0.053 total
Guess it's faster the second time due to the kernel cache or whatnot, but 50ms is still fairly slow. And that's only the GPU.


The algorithm involved is actually very hefty: for each cell of a 9px by 15px grid over the image, it compares each pixel of the cell to the equivalent pixel in each of the 95 printable ASCII characters. To solve for optimal grid alignment, it repeats this for each of the 9 x 15 = 135 possible positionings of the image under the grid.
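The brute-force search described above can be sketched as follows (a simplified reconstruction, not the tool's actual code: glyphs and the image are binary 2D grids here, whereas the real tool compares colored pixels):

```python
# Characters are assumed rasterized on a 9x15-pixel cell.
CELL_W, CELL_H = 9, 15

def cell_mismatch(image, x0, y0, glyph):
    """Count pixels in one 9x15 cell that differ from a glyph bitmap.
    Pixels outside the image count as background (False)."""
    h, w = len(image), len(image[0])
    bad = 0
    for dy in range(CELL_H):
        for dx in range(CELL_W):
            x, y = x0 + dx, y0 + dy
            pixel = image[y][x] if 0 <= x < w and 0 <= y < h else False
            bad += (pixel != glyph[dy][dx])
    return bad

def best_alignment(image, glyphs):
    """Try all 9x15 = 135 grid offsets; for each, pick the best-matching
    glyph per cell, and return (offset, total_mismatch) for the
    alignment whose cells match the glyph set best overall."""
    h, w = len(image), len(image[0])
    best = None
    for off_y in range(CELL_H):
        for off_x in range(CELL_W):
            total = 0
            for y0 in range(-off_y, h, CELL_H):
                for x0 in range(-off_x, w, CELL_W):
                    total += min(cell_mismatch(image, x0, y0, g)
                                 for g in glyphs)
            if best is None or total < best[1]:
                best = ((off_x, off_y), total)
    return best
```

With a 370 x 370 image this is on the order of 135 offsets x ~1,000 cells x 95 glyphs x 135 pixel comparisons, which goes some way toward explaining the multi-second runtime complained about above.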
