Simulating a Planet on the GPU: Part 1 (2022)

Original link: https://www.patrickcelentano.com/blog/planet-sim-part-1

Facing performance problems with a CPU-based polygonal approach to simulating tectonic plates, the author turned to the GPU for a solution. Initial attempts to optimize the CPU code through chunking, parallelization, and memory rearrangement all failed, suggesting the underlying problem was the sheer volume of computation. Despite limited experience outside graphics programming and CUDA, the author explored compute shaders, recognizing their potential for massively parallel processing. Using cubemaps, they successfully simulated plate collision, subduction, and seafloor spreading, but struggled to achieve realistic deformation. While the compute shader approach showed promise, the author realized a new technique would be needed to accurately model the deformation that occurs at plate boundaries, marking the next step in their pursuit of a realistic tectonic simulation.


Original text

Frustrated with the poor performance of my polygonal approach, I began to research Unity’s performance optimization tools. I chunked up and parallelized the Voronoi tessellation to no avail. I tried rearranging memory to no avail. Clearly, something about doing this amount of math on the CPU was beyond current processing capabilities… but what about the GPU?
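The kind of CPU-side optimization described above can be sketched roughly as follows: assigning every grid cell to its nearest Voronoi seed, processed in chunks so each chunk's distance table stays small. This is a minimal illustration of the general technique, not the author's actual Unity code; the names, sizes, and NumPy vectorization are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
seeds = rng.random((32, 2))  # 32 hypothetical plate seeds in the unit square

def voronoi_labels(points, seeds, chunk_size=4096):
    """Label each point with the index of its nearest seed, one chunk at a time."""
    labels = np.empty(len(points), dtype=np.int32)
    for start in range(0, len(points), chunk_size):
        chunk = points[start:start + chunk_size]
        # squared distances from every point in the chunk to every seed
        d2 = ((chunk[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=-1)
        labels[start:start + chunk_size] = d2.argmin(axis=1)
    return labels

# a 256x256 grid of sample points covering the unit square
ys, xs = np.mgrid[0:256, 0:256] / 255.0
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
labels = voronoi_labels(grid, seeds)
```

Even with chunking and (in the real project) parallelization across chunks, the work is O(cells × seeds) every frame, which hints at why the CPU approach hit a wall.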

Like most programmers, I’d heard of the legendary power of GPUs, but had only really harnessed it via graphics programming or CUDA (for my Music ex Machina project). While I had some experience writing conventional shaders, learning how to write compute shaders seemed like a massive undertaking… but what option did I have? I had no room left on the CPU.

Put simply, compute shaders are capable of applying a GPU’s heavily-parallelized workflow to arbitrary data, meaning I could simulate a world full of tectonic plates one “pixel” at a time. Once I figured out how to represent potentially world-spanning plates as cubemaps, I managed to create a neat compute shader-based simulation with plates colliding, subducting, and emerging from seafloor spreading… but never deforming.
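The cubemap idea mentioned above boils down to mapping a direction on the unit sphere to one of six cube faces plus (u, v) texel coordinates, so a planet-spanning field can be stored as six square textures and processed one texel at a time. Below is a hedged sketch of that face-selection math using the common +X/-X/+Y/-Y/+Z/-Z face ordering; it is an illustration of the standard technique, not the author's actual shader code.

```python
def direction_to_cubemap(d):
    """Map a unit direction vector to (face, u, v) with u, v in [0, 1].

    The dominant axis of the direction picks the face; the other two
    components, divided by the dominant magnitude, give the face-local
    coordinates, which are remapped from [-1, 1] to [0, 1].
    """
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                 # +X or -X face dominates
        face = 0 if x > 0 else 1
        u, v = (-z / ax if x > 0 else z / ax), -y / ax
    elif ay >= az:                            # +Y or -Y face
        face = 2 if y > 0 else 3
        u, v = x / ay, (z / ay if y > 0 else -z / ay)
    else:                                     # +Z or -Z face
        face = 4 if z > 0 else 5
        u, v = (x / az if z > 0 else -x / az), -y / az
    return face, (u + 1) / 2, (v + 1) / 2

face, u, v = direction_to_cubemap((1.0, 0.0, 0.0))
```

In a compute shader the inverse mapping runs per texel: each thread reconstructs its direction from (face, u, v), samples the neighboring crust cells, and writes the updated state back, which is how plates that wrap the whole sphere can still be advanced one "pixel" at a time.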

While I liked the direction this adventure in compute shaders had taken me, I needed some new technique which could realistically deform crust at convergent plate boundaries.
