(Comments)

Original link: https://news.ycombinator.com/item?id=43954178

This Hacker News discussion centers on JEP 515, an OpenJDK proposal to improve Java startup time through ahead-of-time (AOT) method profiling. Commenters highlight the feature's potential benefit for the Java standard library, particularly Streams, since it enables precompilation. The JEP aims to speed up the initial phase of JVM execution, potentially giving Java applications "Graal-level" startup performance. Some see a significant win for short-lived Java applications (e.g., lambda-style deployments), while others expect a negligible impact on long-running server applications. The discussion also touches on existing AOT technology such as Graal, and on similar features in commercial JVMs such as OpenJ9 and the WebSphere Real-Time JVM. Some suggest the approach could be applied to other languages, such as Python. The possibility of caching native code to eliminate JIT overhead for hot functions is also considered.

Related articles
  • (Comments) 2025-04-15
  • JEP 515: Ahead-of-Time Method Profiling 2025-05-11
  • (Comments) 2025-05-10
  • (Comments) 2025-05-12
  • (Comments) 2025-03-30

  • Original
    JEP 515: Ahead-of-Time Method Profiling (openjdk.org)
    99 points by cempaka 1 day ago | 10 comments

    The most impact will be achieved on the Java standard library, like Streams (cited in the article). Right now, although their behavior is well established and they are mostly used in "factory" mode (no user subclassing or implementation of the stream API), they cannot be shipped with the JVM already compiled.

    If you can find a way (and this JEP is one such way) to make the bulk of the Java standard API AOT-compiled, then Java programs will be faster (much faster).

    Also, the JVM is already a marvel of engineering (Java JIT code is fast as hell), but this will make Java programs much nimbler.
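
    To make the Streams point concrete, here is a minimal sketch (the class name and data are invented for illustration) of the kind of short-lived, Streams-heavy program this targets: on a cold JVM, every lambda and java.util.stream frame below runs in the interpreter while the JIT gathers profiles, which is exactly the warmup a shipped profile would let it skip.

        import java.util.List;
        import java.util.stream.Collectors;

        // Hypothetical example: the program's useful work is one short Streams
        // pipeline. On a cold start the java.util.stream internals run in the
        // interpreter while profiles are collected; with an AOT method profile
        // (JEP 515) those hot stdlib methods can be JIT-compiled right away.
        public class StartupHotPath {
            public static void main(String[] args) {
                List<String> out = List.of("jep", "515", "aot", "profile").stream()
                        .filter(s -> s.length() > 3)   // lambda: interpreted until profiled
                        .map(String::toUpperCase)      // method ref through stream plumbing
                        .collect(Collectors.toList());
                System.out.println(out);               // prints [PROFILE]
            }
        }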



    I assume what you meant by the AOT argument is: "The initial few minutes of a JVM's existence, which would be its entire lifetime if you're using Java the way you use, e.g., your average executable in your `/usr/bin` dir".

    Saying "java programs will be faster" is perhaps a bit misleading to those who don't know how java works. This will speed up only the first moments of a JVM execution, nothing more. Or, I misread the JEP, in which case I'd owe you one if you can explain what I missed.

    As a Java developer this will be mildly convenient when developing. We go through JVM warmup a lot more than your average user ever does. Personally I think I'm on the low end (I like debuggers, and I don't use the TDD style of "what I work on is dictated by a unit test run, and thus I rerun the tests a lot during development"). But it still excites me somewhat, so your average Java dev should be excited quite a bit by this.

    I am not all that experienced with it, but I gather that lambda-style Java deployments (self-contained simple apps that run on demand and could in theory be operating on a "let's boot up a JVM to run this tiny job that won't last more than half a second" model) have looong ago moved on from actually booting JVMs for every job, for example by using Graal, an existing AOT tool. But if you weren't using those, hoo boy. This gives every Java app "Graal-level bootup" for, as far as I can tell, effectively free (a smidge of disk space to store the profile).

    For the kinds of Java deployments I'm more familiar with (a server that boots when the box boots and stays running until a reboot is needed to update dependencies or the app itself), this probably won't cause a noticeable performance boost.
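
    For reference, the profiles in question live in the AOT cache that JEP 483 introduced and JEP 515 extends; the training-run workflow described in the JEP looks roughly like this (app.jar and com.example.App are placeholders):

        # Training run: observe the application and record an AOT configuration
        java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
             -cp app.jar com.example.App

        # Assemble the AOT cache; with JEP 515 it now holds method profiles too
        java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
             -XX:AOTCache=app.aot -cp app.jar

        # Production runs start from the cached profiles, so the JIT warms up sooner
        java -XX:AOTCache=app.aot -cp app.jar com.example.App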



    I thought Graal was going to slowly replace HotSpot?


    There was talk of the Graal JIT replacing C2, but Native Image will never replace HotSpot.


    OpenJ9 has had some of this type of functionality for a while now. Glad to see the difference between interpreted and compiled languages continue to get fuzzier.


    Even longer than that: OpenJ9's AOT capabilities and JIT cache go back to the WebSphere Real-Time JVM, whose branding had nothing to do with the J2EE application server.

    Most of the documentation is gone from the Internet, but I was able to dig up one of the old manuals:

    https://ftpmirror.your.org/pub/misc/ftp.software.ibm.com/sof...

    These kinds of features have been available in commercial JVMs like those for a while now; what the community is finally getting are free-beer versions of such capabilities.



    It would be interesting if the Faster Python team considered this approach for Python (though maybe they already have?)


    Faint echoes of the very first optimizing compiler, Fortran I, which ran a Monte Carlo simulation of the program's flow graph to detect hot spots so it could allocate registers to inner loops first.


    Is this similar/the same as Azul Zing’s ReadyNow feature?


    In addition to storing profiles, what about caching some native code, so that we can eliminate the JIT overhead for hot functions?

    EDIT: they describe this in the JEP's "Alternatives" section as future work.






