(comments)

Original link: https://news.ycombinator.com/item?id=39371669

Yes, that's right. The author emphasizes that Apple silicon supports OpenGL 4.6 by emulating certain features rather than supporting them directly, a significant departure from the API's traditional hardware-accelerated model. This raises questions about the performance trade-offs and limitations inherent in this approach, particularly for games and multimedia applications that lean heavily on graphics processing. The author also notes that Asahi Linux has not yet reached parity with traditional BSD distributions, suggesting that further refinement is needed before Linux can fully exploit the capabilities of Apple silicon hardware. Ultimately, this highlights the challenges of building open-source drivers while maintaining conformance with standards like OpenGL 4.6, and underscores the sustained effort required to bridge that gap.

Related articles

Original
Hacker News
Conformant OpenGL 4.6 on the M1 (rosenzweig.io)
413 points by patadune 18 hours ago | 83 comments
Alyssa Rosenzweig is a gift to the community that keeps on giving. Every one of her blog posts is a guarantee to learn something you didn't know about the internals of modern graphics hardware.


This endeavour proves to me that skills beat talk every single day. Just reading the blog sets my brain on fire; there is so much to unpack. The punch line is not the last sentence but the second, yet you're still compelled to follow the path down the rabbit hole, enjoying one bit manipulation after another.

If there ever are benchmarks with eureka effects per paragraph Alyssa will lead them all.

Just thanks!



One day, Apple will deprecate OpenGL 3.3 core, and I guess everybody else might end up deprecating it too.

I've read that OpenGL is generally just easier to use than Vulkan. I don't know if that's true, but if something is too complicated, it becomes too hard for less experienced devs to exploit those GPUs; it becomes a barrier to entry, which might discourage some indie game developers.

Everyone uses Unity and Unreal now; baking things from scratch or using other engines is just considered weird, for some reason. It's really annoying, and it's fun to see gamedev wake up after Unity tried to lock things down more.

Open source in gaming has always been stretched thin. Godot is there, but I doubt it can seriously compete with Unity and Unreal even if I want it to. Even if Godot is capable, indie gamedevs are more experienced with Unity and Unreal and will stick to those.

The state of open source in game dev feels really hopeless sometimes, and the rise of next-gen graphics APIs is not making things easier.



> I've read that generally opengl is just easier to use than vulkan

[here's](https://learnopengl.com/code_viewer_gh.php?code=src/1.gettin...) an opengl triangle rendering example code (~200 LOC)

[here's](https://vulkan-tutorial.com/code/17_swap_chain_recreation.cp...) a vulkan triangle rendering example code (~1000 LOC)

Yeah, it's fair to say OpenGL is a bit easier to use.



You’re getting downvoted for some reason, but OpenGL is absolutely easier. It abstracts so much (and for beginners there's still a ton to learn even with all the abstraction!). Unlike with Vulkan, there's no need to think about how to prep pipelines, optimally upload your data, or manually synchronize your rendering. Vulkan's low-level nature lets you eke out every bit of performance, but for indie game developers and the majority of graphics development that doesn't depend on realtime PBR with giant amounts of data, OpenGL is still immensely useful.

If anything, an OpenGL-like API will naturally be developed on top of Vulkan for the users that don’t care about all that stuff. And once again, I can’t stress this enough, OpenGL is still a lot for beginners. Shaders, geometric transformations, the fixed function pipeline, vertex layouts, shader buffer objects, textures, mip maps, instancing, buffers in general, there’s sooo much to learn and these foundations transcend OpenGL and apply to all graphics rendering. As a beginner, OpenGL allowing me to focus on the higher level details was immensely beneficial for me getting started on my graphics programming journey.
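The "prep pipelines" contrast above can be sketched in a toy Python model (all names are hypothetical, not real API calls): OpenGL exposes one big mutable state machine whose current combination the driver may have to revalidate at every draw, while Vulkan bakes the combination into an immutable pipeline object once, up front.

```python
class GLContext:
    """OpenGL-style: one mutable global state machine."""
    def __init__(self):
        self.state = {"blend": False, "depth_test": False, "program": None}

    def enable(self, cap):
        self.state[cap] = True  # glEnable-style toggle

    def use_program(self, program):
        self.state["program"] = program

    def draw(self):
        # The driver must work out (and validate) the effective state
        # combination at draw time, potentially on every call.
        return dict(self.state)


def vk_create_pipeline(blend, depth_test, program):
    """Vulkan-style: the whole state combination is validated and
    frozen once, up front, into an immutable pipeline object."""
    return ("pipeline", blend, depth_test, program)


def vk_draw(pipeline):
    # Drawing just references the prebuilt pipeline; nothing to revalidate.
    return pipeline
```

The convenience OpenGL buys is exactly this: the driver, not the application, pays the cost of figuring out valid state combinations.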



It won't be OpenGL-like, it will probably just be OpenGL https://docs.mesa3d.org/drivers/zink.html


This is a bit misleading. Much of the extra code that you'd have to write in Vulkan to get to first-triangle is just that, a one-time cost. And you can use a third-party library, framework or engine to take care of it. Vulkan merely splits out the hardware-native low level from the library support layer, that were conflated in OpenGL, and lets the latter evolve freely via a third party ecosystem. That's just a sensible choice.


And often those LOC examples use GLFW or some other library to load OpenGL. Loading a Vulkan instance is a walk in the park compared to initializing an OpenGL context, especially on Windows. It's incredibly misleading. If you allowed utility libraries for Vulkan to compare LOC-to-triangle Vulkan would be much closer to OpenGL.


FWIW Metal is actually easier to use than Vulkan in my opinion, as Vulkan is kind of designed to be super flexible and doesn't have as much niceties in it. Either way, OpenGL was simply too high level to be exposed as the direct API of the drivers. It's much better to have a lower level API like Vulkan as the base layer, and then build something like OpenGL on top of Vulkan instead. It maps much better to how GPU hardware works this way. There's a reason why we have a concept of software layers.

It's also not quite true that everyone uses Unity and Unreal. Just look at the Game of the Year nominees from The Game Awards 2023: all six of them were built with in-house engines. Among indies there are also still some developers who build their own engines (e.g. Hades), but it's true that the majority will just use an off-the-shelf one.



WGPU is kinda supposed to solve the problem by making a cross platform API more user friendly than Vulkan. The problem with OpenGL is that it is too far from how GPUs work and it's hard to get good performance out of it.


It is hard to get the absolute best performance out of OpenGL but it isn't really hard to get good performance. Unless you're trying to make some sort of seamless open world game with modern AAA level of visual fidelity or trying to do something very out of the ordinary, OpenGL is fine.

A bigger issue you may face is OpenGL driver bugs but AFAIK the main culprit here was AMD and a couple of years ago they improved their OpenGL driver to be much better.

Also at this point OpenGL still has no hardware raytracing extension/API so if you need that you need to use Vulkan (either just for the RT bits with OpenGL interop or switching to it completely). My own 3D engine uses OpenGL and while the performance is perfectly fine, i'm considering switching to Vulkan at some point in the future to have raytracing support.



OpenGL is not deprecated, it is simpler and continues to be used where Vulkan is overkill. Using it for greenfields is a good choice if it covers all your needs (and if you don't mind the stateful render pipeline).


It kind of is, OpenGL 4.6 is the very last version, the red book only covers until OpenGL 4.5, and some hardware vendors are now shipping OpenGL on top of Vulkan or DirectX, instead of providing native OpenGL drivers.

While not officially deprecated, it is standing still and won't get anything newer past 2017 hardware, not even newer extensions are being made available.



It is officially deprecated on all Apple platforms, and has been for five years now.

Whether it will actually stop working anytime soon is a different question; but it is not a supported API.



For context: https://developer.apple.com/documentation/opengles

It is marked as being deprecated as of iOS 12, which came out in September 2018.

Non-ES version was deprecated in aligned macOS version 10.14: https://developer.apple.com/library/archive/documentation/Gr...



> One day, Apple will deprecate opengl 3.3 core, and I guess everybody might end up deprecating it.

And here I am, recalling all the games and programs that failed once OpenGL 2.0 was implemented because they required OpenGL 1.1 or 1.2 but just checked the minor version number... time flies!
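That failure mode is easy to reproduce. A hypothetical loader that requires OpenGL 1.2 but compares only the minor version will reject OpenGL 2.0, even though 2.0 is strictly newer:

```python
def supports_gl_buggy(version, required_minor=2):
    """Naive check seen in old games: compare only the minor version."""
    major, minor = (int(p) for p in version.split(".")[:2])
    return minor >= required_minor  # bug: OpenGL "2.0" fails a 1.2 check


def supports_gl_fixed(version, required=(1, 2)):
    """Correct check: compare (major, minor) lexicographically."""
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) >= required
```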



My understanding is that one of the primary reasons Vulkan was developed was because OpenGL was not a good model for GPUs, and supporting it prevented people from taking advantage of the hardware in many cases.


> Regrettably, the M1 doesn’t map well to any graphics standard newer than OpenGL ES 3.1. While Vulkan makes some of these features optional, the missing features are required to layer DirectX and OpenGL on top. No existing solution on M1 gets past the OpenGL 4.1 feature set.

I'm very curious to know the performance impact of this, particularly compared to using Metal on macOS. (I'm sure the answer is "it depends", but still.)

It's possible the article answers this question, but I didn't understand most of it. :(



There isn't necessarily much difference between implementing features in driver compute code versus GPU hardware support. Even the "hardware support" is usually implemented in GPU microcode. It often goes through the same silicon. Any feature could hit a performance bottleneck and it's hard to know which feature will bottleneck until you try.


This is for Fedora on the M1. It would be amazing to get this for macOS. What's involved in pulling something like that off?


Ultimately they build command buffers and send them to the GPU. You'd need a way to do that from macOS.

The original Mesa drivers for the M1 GPU were bootstrapped by doing just that, sending command buffers to the AGX driver in macOS using IOKit.

https://rosenzweig.io/blog/asahi-gpu-part-2.html

https://github.com/AsahiLinux/gpu/blob/main/demo/iokit.c

So you'd need a bit more glue in Mesa to get the surfaces from the GPU into something you can composite onto the screen in macOS.



According to the devs, it isn't really possible due to Apple not having a stable public kernel API: https://social.treehouse.systems/@AsahiLinux/111930744188229...


Perhaps already possible via MoltenVK -> Vulkan -> Zink?


Probably needs one or two more layers just to be sure.


I think Apple bans third party kernel drivers? To write a proper Vulkan or OpenGL implementation you need a kernel counterpart for handling the GPU if I understand correctly. That's probably the reason no one bothers implementing native Vulkan for macOS.

But if it's doable with Apple's driver - then not sure.



I find it very amusing that transitioning out of bounds accesses from traps to returning some random data is called “robustness”. Graphics programming certainly is weird.


It makes sense from the perspective of writing graphics drivers, and aligns with Postel's law (also called the robustness principle). GPU drivers are all about making broken applications run, or run faster. Making your GPU drivers strict by default won't fix the systemic problems with the video game industry shipping broken code, it'll just drive away all of your users.

And on hardware where branches are generally painfully expensive, it sounds really useful to have a flag to tell the system to quietly handle edge cases in whatever way is most efficient. I suspect there are a lot of valid use cases for such a mode where the programmer can be reasonably sure that those edge cases will have little or no impact on what the user ends up seeing in the final rendered frame.



The out-of-bounds accesses don't necessarily trap without the robustness checks, so the robustness is about delivering known results in those goofy cases. It makes sense when you combine that with the fact that GPUs are pretty against traps in general. Carmack remarked once that it was a pain to get manufacturers interested in the idea of virtual memory when he was designing megatexture.
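For what "known results" means in practice, here is a rough model (not driver code) of one behavior Vulkan's robustBufferAccess feature permits: out-of-bounds reads return zero instead of trapping, and out-of-bounds writes are silently discarded.

```python
def robust_read(buffer, index):
    """In-bounds reads behave normally; out-of-bounds reads
    yield a defined value (0) rather than a trap."""
    if 0 <= index < len(buffer):
        return buffer[index]
    return 0


def robust_write(buffer, index, value):
    """Out-of-bounds writes are dropped instead of corrupting
    memory or faulting."""
    if 0 <= index < len(buffer):
        buffer[index] = value
```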


This is one of the reasons why C and C++ have a rosy life ahead of them in the graphics, HPC, HEP, and HFT domains.

In domains where "performance trumps safety" culture reigns, talking about other programing languages is like talking to a wall.



This is obviously very exciting, but—why not target Vulkan first? It seems like the more salient target these days and one on top of which we already have an OpenGL implementation.


OpenGL-on-Vulkan compat layers aren't magic. For them to support a given OpenGL feature, an equivalent feature must be supported by the Vulkan driver (often as an extension). That means you can't just implement a baseline Vulkan driver and get OGL 4.6 support for free; you must put in the work to implement all the OGL 4.6 features in your Vulkan driver if you want Mesa to translate OGL 4.6 to Vulkan for you.

Plus, this isn't Alyssa's first reverse engineering + OpenGL driver project. I don't know the details but I'd imagine it's much easier and quicker to implement a driver for an API you're used to making drivers for, than to implement a driver for an API you aren't.



They started with targeting older OpenGL to get a basic feature set working first. I guess from there, getting up to a more recent OpenGL was less work than doing a complete Vulkan implementation, and they probably learned a lot about what they'll need to do for Vulkan.


Ok, this makes a lot of sense—OpenGL sort of forms a pathway of incremental support.


Along with that, it's more immediately useful, as it's still what desktops and compositors use, so getting a useful environment necessitates it.


I thought something similar, but from their comments, to support OpenGL over Vulkan you need higher versions of Vulkan anyway and it's still a big effort. So they decided to go with (lower versions of) OpenGL first to get something functional sooner.


> How do we break the 4.1 barrier? Without hardware support, new features need new tricks. Geometry shaders, tessellation, and transform feedback become compute shaders. Cull distance becomes a transformed interpolated value. Clip control becomes a vertex shader epilogue. The list goes on.

I wonder how much of this work is in M1-specific GPU code, versus how much of the feature-implemented-on-another-feature work could be reused by others.

This feels very similar to what Zink does (running complex OpenGL capabilities on a more primitive Vulkan feature set), except there is no Vulkan backend to target for the M1. Yet.
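One of the tricks quoted above, "clip control becomes a vertex shader epilogue", boils down to a small linear remap of clip-space z appended to every vertex shader. A sketch under the usual conventions (OpenGL clips z against [-w, w]; Metal/Vulkan-style hardware against [0, w]); the function name is made up:

```python
def clip_control_epilogue(x, y, z, w):
    """Remap an OpenGL-convention clip-space position (z in [-w, w])
    to the [0, w] convention the hardware expects."""
    return (x, y, (z + w) * 0.5, w)
```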



More generally, you could execute complex OpenGL or Vulkan on some more-or-less arbitrary combination of CPU soft-rendering and hardware-specific native acceleration support. It would just be a matter of doing the work, and it could be reused across a wide variety of hardware - including perhaps older hardware that may be quite well understood but not usable on its own for modern workloads.


Another upvote, another article I wish I had the knowledge and patience to understand better in context. Still, Alyssa's writeups are a fun read.


Same, I wish I knew more about graphics programming. It seems like such a steep learning curve though so I get discouraged.


Don't be discouraged, modern graphics APIs really are a mess, but you don't need to understand 1/100th of them to get graphics going. Also, this post is more about programming drivers than programming graphics.


Kind of crazy to think that the only reason OpenGL was ever a thing for 3D gaming was because of John Carmack's obsession with using it for Quake II back in the 90s.


John Carmack a couple of years later, back in 2011:

> Speaking to bit-tech for a forthcoming Custom PC feature about the future of OpenGL in PC gaming, Carmack said 'I actually think that Direct3D is a rather better API today.' He also added that 'Microsoft had the courage to continue making significant incompatible changes to improve the API, while OpenGL has been held back by compatibility concerns. Direct3D handles multi-threading better, and newer versions manage state better.'

> 'It is really just inertia that keeps us on OpenGL at this point,' Carmack told us. He also explained that the developer has no plans to move over to Direct3D, despite its advantages.

From https://www.bit-tech.net/news/gaming/pc/carmack-directx-bett...



Quake is just one part of the history (probably a small one, not wanting to diminish it, of course). SGI and the enormous effort to get compliant implementations on many different systems and architectures are what made OpenGL what it eventually became.


I think both SGI and Quake were absolutely crucial.

Without Quake, OpenGL would have remained an extremely niche thing for professional CAD and modeling software. And Microsoft would have completely owned the 3D gaming API space.

Quake (and Quake 2, and Quake 3, and the many games that licensed those engines) really opened the floodgates in terms of mass market users demanding OpenGL capabilities (or at least a subset of them) from their hardware and drivers.

I'm not sure how to measure this in an objective way, but if the mass market of PC gamers didn't dwarf the professional CAD/modeling market by several orders of magnitude, I will print out my HN posting history and eat it.



Microsoft never owned the 3D gaming API space, SEGA, Sony and Nintendo also have/had their own APIs.


I never heard about SEGA, Sony or Nintendo 3D APIs being used on a PC. I guess somebody somewhere did it, but it's so insignificant.


I never heard about 3D gaming API space being something PC only, maybe in some fancy FOSS circles.


I don’t know that it was the only reason, but Carmack’s push for OpenGL certainly helped. A lot of things related to 3D games are thanks to doom and Quake.


It also helped that the API was actually user friendly compared to the earlier versions of Direct3D.


> A lot of things related to 3D games are thanks to doom and Quake.

Quake, sure, but Doom? IIRC Doom is far closer to Wolf3D's 2.5D/raycasting than to the "true 3D" of Quake; it rendered on the CPU to a frame buffer with zero hardware acceleration. I find it hard to believe it made any lasting impact on subsequent 3D rendering APIs.



Quake didn't use hardware acceleration either. It was only the later VQuake and GLQuake releases that did.


I think for the purposes of this discussion "Quake" is acceptable shorthand for GLQuake, Quake 2, Quake 3, all the games that used those engines, etc.


Quake got official 3D accelerated versions like GLQuake and VQuake. The improved visuals and better performance these versions offered drove a lot of early 3D accelerator sales in the consumer space.


Fun fact: the earliest archived OpenGL site was a big "FAST GAMES GRAPHICS" banner with an animated Quake 1 graphic and a menu for other stuff :-P

https://web.archive.org/web/19970707113513/http://www.opengl...





And yet it still got its ass kicked by Direct3D because Microsoft made better stuff. Better API, better tooling, better debuggability.

Honestly, it would've been better to leave OpenGL to the legacy CAD vendors and standardize on Direct3D roundabout 1997 or so.



Ye good ol' Microsoft stifled OpenGL on Windows, hence the open letter https://www.chrishecker.com/OpenGL/Press_Release, not to mention the insidious things they did around Fahrenheit (the next-gen OpenGL+Direct3D, one API to rule them all) when they were supposed to be working on it together with SGI. Microsoft did a good job with Direct3D afterwards, but they were and are a shit company that pulled sus maneuvers to get their success; not all of them technical.


Well, except for only working on Xbox and Windows, which pretty much destroys it as a viable direct target for modern games or apps.

> Honestly, it would've been better to leave OpenGL to the legacy CAD vendors and standardize on Direct3D roundabout 1997 or so.

If you remember what Microsoft was like in those days, the chances of D3D being standardized in a viable way on any platform but Windows were about the same as an ice cube's chances in hell.



Nintendo and Sony 3D APIs are also exclusive to their consoles; people keep forgetting about them.


> a viable direct target for modern games

Aside from PlayStation exclusives, nearly every AAA game in the past 20+ years has targeted Direct3D and HLSL first. Any other backend is a port.



The XBox versions of DirectX aren't exactly compatible (in some pretty significant ways, IIRC)


Technically, also Linux (and probably other Vulkan platforms) with dxvk.


I had no idea this was a thing! Cheers.


There's also VKD3D for dx12!


"Unlike the vendor’s non-conformant 4.1 drivers, our open source Linux drivers are conformant to the latest OpenGL versions, finally promising broad compatibility with modern OpenGL workloads, like Blender, Ryujinx, and Citra."

Looks like Apple silicon is currently the best hardware for running Linux, and Linux is the best OS for Apple silicon machines.



> Looks like apple silicon are currently the best hardware for running linux and linux is the best OS for apple silicon machines

Blender has Metal support for Apple Silicon macs. The Metal API is better architected (largely due to being more modern and being developed with benefit of hindsight) so all things equal I'd pick the Metal version on Mac.

In case you missed it in the article, the M1 GPU does not natively support OpenGL 4.6. They had to emulate certain features. The blog post goes into some of the performance compromises that were necessary to make the full OpenGL emulation work properly. Absolutely a good compromise if you're on Linux and need to run a modern OpenGL app, but if your only goal is to run Blender as well as possible then you'd want to stick to macOS.

Ryujinx is a Nintendo Switch emulator. They added support for Apple Silicon Macs a couple years ago and have been improving since then: https://blog.ryujinx.org/the-impossible-port-macos/

Linux on Apple hardware has come a long way due to some incredible feats of engineering, but it's far from perfect. Calling it the "best OS for Apple Silicon" is a huuuuge reach.

It's great if you need to run Linux for day to day operations, though.



Right, Blender Cycles for example can run on Metal, but neither on OpenGL or Vulkan. So while it's nice to have a working OpenGL, it depends if your workflow requires OpenGL apps.


I would be very surprised if Blender Cycles ever ran on top of OpenGL or Vulkan, other than using them as a loader for compute shaders.

That's why it is running as CUDA/OptiX/HIP/oneAPI on Windows and Linux.



Strange the article doesn't use the word "Apple" once, and instead awkwardly uses "the vendor" to refer to Apple.


It's in line with how the Linux/FLOSS community refers to hardware vendors in general, even if Apple is a unique case in many circumstances.


Didn't CentOS use "the vendor" instead of RHEL like this too?


I would love for my employer to support that config at work. We have quite lovely Linux dev laptops, but the battery life of the M1/M2 machines in the IT shop is definitely enticing, and Asahi Linux gets closer to MacOS in that regard than you might think given the relative maturity and optimization.


It definitely isn’t ready for use as a daily driver. There are lots of bits missing (see below for an example) and power management isn’t great compared to macOS.


How so? I've been daily driving it as my only machine since November. Sure, there are missing features, but none that are really essential for most people.


You and I have very different work environments for you to be able to claim that microphones aren't essential for most people.


> Looks like apple silicon are currently the best hardware for running linux

I wonder if this effort to run Linux on apple silicon will continue if snapdragon X laptops become mainstream.



I think it will. One of the main issues with desktop Linux is still broad hardware support; random crap like fingerprint readers or Wi-Fi cards still doesn't work on certain machines. Having a very constrained set of hardware options makes it a lot easier to support. The Snapdragon devices are also starting way behind: both the Surface Pro X and the Lenovo ThinkPad X13s exist today, but their Linux support isn't close to Asahi's.


And Sodium! (For Minecraft)


Still waiting for some hardware support and hardware video decoding.


Hardware video decoding is well on the way: https://github.com/eiln/avd


Apparently a lot of hardware is still not properly supported, like speakers, microphones, and energy saving.


Here’s the very detailed list of feature-support status: speakers are generally supported but microphones are not. They have a driver for some energy savings, but it has some rough edges.

https://github.com/AsahiLinux/docs/wiki/Feature-Support



...and you came to that conclusion because of OpenGL 4.6 - something that several other platforms enjoyed under GNU/Linux with FLOSS drivers for more than half a decade now?





