(comments)

Original link: https://news.ycombinator.com/item?id=38153573

Using web technologies for a game's UI is usually a bad sign for overall performance and optimization, because of all the extra overhead and draw calls that come with running web content inside a game. In this particular case, however, it may not be a big problem, for several reasons. First, the article specifically notes that the time spent rendering the UI falls under what it calls "insignificant time", suggesting it is negligible and does not noticeably affect overall game performance. The article also points out that while individual UI components can require hundreds or thousands of draw calls per update, those numbers are relatively low when set against the total number of frames rendered over a play session. Using web technologies in the UI can certainly hurt performance in some situations, especially if other parts of the UI are not properly optimized and simplified, or are implemented in a less than optimal way. In this case, though, the web-based UI does not appear to be the root cause of the game's performance problems; it looks like just one of many potential areas for improvement. Overall, while web technologies in the UI can certainly affect performance and optimization, they do not appear to be the main factor behind the issues observed here.


Original article
Why Cities: Skylines 2 performs poorly (paavo.me)
754 points by paavohtl 14 hours ago | 466 comments

Hey all: this is an interesting article. Can we please discuss what's specifically interesting here?

Threads like this tend to become occasions for responding generically to stuff-about-$THING (in this case, the game), or stuff-about-$RELATED (in this case, the framework), or stuff-about-$COMPARABLE. There's nothing wrong with those in principle but each step into genericness makes discussions shallower and less interesting. That's why the site guidelines include "Avoid generic tangents" - https://news.ycombinator.com/newsguidelines.html



"And the reason why the game has its own culling implementation instead of using Unity’s built in solution because Colossal Order had to implement quite a lot of the graphics side themselves because Unity’s integration between DOTS and HDRP is still very much a work in progress and arguably unsuitable for most actual games."

This sadly tracks with my own experiences with Unity's tooling, where DOTS did ship but its implementation rots on the vine like every other tool they acquired. The company is and has been woefully mismanaged, and given the very public pricing incident from a few weeks back, they aren't focusing on improvements to the engine, but on any way to scrape money from its users.

Bevy's ECS implementation is really good, and I want to see it succeed here, in addition to Godex.



>Bevy's ECS implementation is really good, and I want to see it succeed here, in addition to Godex.

Godex is ultimately putting lipstick on a pig. It can improve performance a bit, but ECS isn't some magical optimization to slap on as a plug-in. Cache coherency in the gameplay layer can't fix engine-level bottlenecks.



I don't understand how Unity burns through up to a billion in revenue every single year, yet their engine still feels so half-baked and unpolished. Where's all that money going?


Lots of cash can be burned in meetings and by managers of managers. I'm sure my case is not unique here, but meetings that cost $100k are common at $BigCo. And there are dozens of those per day.


You have 5-hour long meetings with 40 people each costing $500/hr?


Let me introduce you to the world of PI planning where two days of every quarter are spent with approximately 100 people (developers, program managers, etc) for each line of business to plan out the next quarter of work - even if 90% of it is continuing the work from the last quarter...


Two days sounds lucky. I've been in SAFe plannings where it took an entire week to plan things; it was unreal. Like you, I met people I'd never seen or heard of from across the company, all giving their input on software I'm creating. I'd never see them again until the next SAFe planning, only this time it felt like 1 SWE out of maybe 15 (yes, there were only 3 teams of 5 devs, yet 100+ people in these SAFe meetings) contributed anything meaningful.

The company wasted so much money, and then the org was shut down for spending $500 per customer per year in maintenance (this was a health insurance company) whereas the main company would only spend $70 per customer per year. You'd think the org's leadership would get fired for this, but they were rewarded with other positions within the company to do the same thing.

Unreal. Why do shareholders put up with this? I guess the healthcare monopoly is the only reason they do.



Half a day is spent on collecting and analyzing the confidence vote. The results of the confidence vote have no impact on the plan except as a baseline for the next PI.


… and inevitably three weeks later priorities change and it all goes out the window.


AWS has (or had) a weekly 2-hour operational status meeting. It has over 200 people in it, a mix of distinguished and principal engineers, very senior managers, etc. That meeting easily crossed the $100k-per-pop threshold. Worth it, though, that was actually a pretty great institution.


Sometimes the meeting has folks who are even more expensive (not me). Sometimes there's more than 50. Four hours of meetings, six hours of wall-clock time.

Edit: some companies have >20k employees. My first years at MS (late 90s) had these "war room" meetings like 2x a week, 50 people, multiple VPs in the room, 2h. But there were other groups doing their war rooms too.



Agile shaman took the wheel


Agile done properly is literally the opposite of this. Big Important Exec spouting Agile terms they don't understand and cluelessly forcing top-down crap on the teams, more likely. Normalize your story points, prole! Daddy needs metrics!


DOTS is homegrown isn’t it?


Right, it is. I intended to talk about Bolt and ProBuilder, tools they bought, added to the engine, and then left to rot.

Although to build DOTS they did poach a lot of ECS and Data-Oriented folks like Mike Acton, who left earlier this year.



Unity is a clown engine. I remember some guy benchmarked DOTS against plain old C++ code with OpenGL, just like your dad used to write, and it was no contest: DOTS couldn't keep up.

You have to remember, Unity usually sets the bar very low for themselves, comparing against their ancient Mono implementation (they still use the Boehm GC, written in the 1980s!), and when a shiny new performance 'fix' like Burst/DOTS drops, they proudly proclaim how much faster they managed to make things. Never mind that a modern .NET implementation like Microsoft's CoreCLR would achieve the same performance without any proprietary tech and without any code modification.



Huh, I never knew Unity used its own CLR implementation. Any idea why?


I think its implementation is a fork of a 2006-era Mono (a clean room reverse engineered .NET implementation). After a few releases, they had a falling out with the Mono team over licensing, meaning they got stuck on an old release.

Besides that, they made a bunch of proprietary changes to Mono to make it run on their engine, and to be able to export to literally any platform under the sun.

A lot of platforms, like iOS and consoles, have (or had) a strict no-JIT policy, so they needed to come up with a way of statically compiling code for said platforms. One of the methods they used was IL2CPP, which turned .NET bytecode into horrible-looking C++, full of gotos, weird labels, and structs getting passed around.

Considering some platforms had limitations like having to compile the game solely with the C++ compiler the platform supplied, I'm not sure they had a better option, but it's still horribly hacky.

They've been manually syncing up changes from the more recent versions, but I can't really tell at what pace.

But the thing is, even the official Mono has never really kept pace with MS's implementation, and recently, after the acquisition, Mono was dropped by MS in favor of the CoreCLR.



Unity is older than the Core runtime. They had to use Mono as their base.


Tip for those wanting to play it: change resolution scaling from dynamic to constant.

I have a 3080 and it basically moves it from "unplayable 10fps in the main menu" to "works just fine, no issues in game" with medium-high graphics.



That’s not what fixes it. Disable motion blur and depth of field. Depth of field kills the menu in particular.


Or off, entirely. On my 3080 it seems to cause lots of rendering artifacts.


I really appreciate the writing style:

> This pass is surprisingly heavy as it takes about 8.2 milliseconds, or roughly about far too long, ...



I wish more people would write like this, sprinkling little bits of humor in a somewhat serious piece


You might then like RockPaperShotgun and The Register


Thanks!


For a bit of reference, a full frame of Crysis (benchmark scene) was around 300k vertices or triangles (memory is fuzzy), so 3-10 log piles depending on which way my memory is off and how bad the vertex/triangle ratio is in each.


Author here: I never bothered counting the total vertices used per frame because I couldn't figure out an easy way to do it in Renderdoc. However someone on Reddit measured the total vertex count with ReShade and it can apparently reach hundreds of millions and up to 1 billion vertices in closeups in large cities.

Edit: Checked the vert & poly counts with Renderdoc. The example scene in the article processes 121 million vertices and over 40 million triangles.
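
For a rough sense of scale from those numbers: at the 5120 × 1440 resolution the author appears to be running, a frame is 5120 × 1440 = 7,372,800 pixels, so over 40 million triangles works out to roughly 5+ triangles per pixel before any overdraw.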



“If you’ve used more triangles than there are pixels on the screen, you’ve messed up.”

I call this kind of thing the “kilobyte rule” because a similar common problem is shoving tens of megabytes of JSON down the wire to display about a kilobyte of text in a browser.



I'd think today's problem is tens of megabytes of JavaScript to display a couple kilobytes of text.


Sounds right. I remember seeing "1M Triangles" in the performance HUD and thinking, that's crazy, a million triangles. Probably very few shared vertices once you account for edge splits, billboards, etc.


So the game uses extremely detailed models, then fails to have an intelligent way of abstracting/culling away those things that will never make it into pixels. Is that a fair summary?

Also, does that mean that easy fixes are available or is this so core that solutions would require going back to the drawing board?



> This mesh of a pile of logs is similarly only used in the shadow rendering pass, and features over 100K vertices.

But… why?



Because it is one of the 1,000,000 things to pay attention to in game development. Someone or some software probably just made a mistake in setting up its LOD. Or some dynamic LODding code didn't properly cull the LOD0 mesh. Or that code couldn't be finished in time. Or it was something else.
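
For readers who haven't touched Unity: "setting up its LOD" usually means configuring a LODGroup with progressively simpler meshes and a cull threshold. Below is a minimal sketch of what that looks like; the renderer fields and transition values are hypothetical placeholders, not anything taken from the game.

    // Illustrative sketch only, not Colossal Order's code: a Unity LODGroup
    // configured with progressively simpler renderers. Fields and thresholds
    // are hypothetical placeholders.
    using UnityEngine;

    public class LogPileLodSetup : MonoBehaviour
    {
        public Renderer lod0Renderer; // e.g. the full-detail source mesh
        public Renderer lod1Renderer; // a decimated variant
        public Renderer lod2Renderer; // a very coarse variant or billboard

        void Awake()
        {
            var lodGroup = gameObject.AddComponent<LODGroup>();

            // Screen-relative transition heights: LOD0 only while the object
            // covers more than ~60% of screen height, LOD1 down to ~25%,
            // LOD2 down to ~5%, culled entirely below that.
            lodGroup.SetLODs(new[]
            {
                new LOD(0.60f, new[] { lod0Renderer }),
                new LOD(0.25f, new[] { lod1Renderer }),
                new LOD(0.05f, new[] { lod2Renderer }),
            });
            lodGroup.RecalculateBounds();
        }
    }

With a setup like this, the full-detail mesh is only ever drawn when the object is large on screen.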

It's completely normal in AAA games to have a few imperfect and suboptimal things. Budgets are always limiting, and development times short. Plus, it's a hit-driven industry where payoff is not guaranteed. There are some things you can do (which are usually management-related and not dev-related) to make the game a success, but estimated bookings are rarely on point. So trade-offs have to be made to de-risk - corners cut where possible, the most expensive part - development - de-prioritized. These are much bigger trade-offs than a single mesh being unoptimized. A single mesh is nothing.

It's a fun fact that this mesh is LOD0, and so is the teeth mesh. But that alone doesn't tank the performance of the game, and it is probably unlikely to be addressed ahead of actual performance fixes. The fixation on these meshes in the thread is kind of excessive.

A lot of these comments are quite heated, so I don't want to add to that - just giving more context.



> Because it is one of the 1,000,000 things to pay attention to in game development.

Finding objects with ridiculous triangle counts is one of the easiest things to do when you have known performance issues.

If they didn't have time to do the first, easiest chunk of the work, then something was far more dysfunctional than "it's one of a million things to deal with".



This is on par with: “Sure, there’s a raging fire in that corner of the office, but it’s just one of a million things I had to deal with that day. That’s why I didn’t call the fire department, okay!?”

Staying on top of poly count budgets is like game dev 101.



Happens more often than you think, despite the absurd metaphor. Maybe there was an inferno in the neighborhood and no time to worry about the fire in the corner of the apartment. Maybe your publisher doesn't care if your house burns down but wants to make money now for their own earnings.

I don't think it's uncommon even outside of gaming for modern software to have these sorts of "hidden fires". Games just get a lot more conversation and a lot of niche problems, being 3D real-time applications.



But gamedev usually means all-new-faces teams, as the old ones quit in lockstep. So it's very likely a bunch of youngsters running around yelling that "premature optimization" is the death of all good things, mistaking that for needing no optimization plan at all until the game is almost done.


> It's completely normal in AAA games to have a few imperfect and in-optimal things.

No, mate, stop. The state of C:S2 is well beyond anything we should accept as "completely normal". It's a defective product that should not have been released. Stop normalising this crap.



you're missing the forest for the badly rendered tree in some E3 showcase that everyone nitpicked to death. Can we not treat this discussion like a Reddit rant, please?

Gamedev has many shortcuts, and some things simply fall through the cracks. Some are caught and fixed, some are caught and not fixed, some just aren't caught at all. I imagine it's the 2nd case here; there's an unfortunately large number of bugs these days that are caught by QA but not given time to be fixed before publisher mandates.



Good grief, I have a mid tier AMD card and I'm having a blast with almost 40 hours in the game already. Can we quit with the "defective" propaganda?

The game runs fine and it's really fun. This "controversy" really drives home for me how detached from reality online discourse often is



The most common GPU on steam stats is a 3060. The AMD 7800/7700 do 90% and 70% better on benchmarks. So if you have either of those, you're getting nearly twice the FPS that the most common steam user would see.


I'm running on a laptop 2070 and doing just fine. Calling it defective is just blatantly lying.

It should be better optimized, but calling it "defective" does nothing but make people dismiss your comment.



It works on my machine


They pay for it, they accept it - it's that simple.

All this crying when they could have simply returned the product or not bought it at all. Colossal Order themselves warned about the performance before it was even released. There were plenty of reviews that said the same thing.

So to get up in arms about the performance means they are just being exceptionally stupid and entitled, and they should just grow up and stop crying over their toys.

Colossal Order can release whatever garbage they want to. And you can choose to buy it or not, or even buy it and return it (fight for better return policies if you want something positive).



What trade off would you choose between fixing a performance issue which is somewhat avoidable and fixing a game breaking crash bug? Because the latter gets priority and there’s never enough time to fix all of those before launch, let alone work your way up to closing out every last frame rate drop.


I think if you get 100k poly models in your (city builder) game in the first place (for any but the most amazing wonders) your process has failed spectacularly at some point.


Their point is that specific mesh could be left alone and the game still be playable as long as other issues were fixed.

Chances are a nearly complete version of C:S2 was playable and they “broke it” at the last minute by not finishing the optimization process.



That's speculation based on nothing but vibes.


It's speculation based on these mesh sizes being so arbitrary in the game development process, and on what's broken being unnecessary window dressing for gameplay. It's the kind of thing that could be delayed to the last minute with some simple placeholder.

“Now you might say that these are just cherry-picked examples, and that modern hardware handles models like these just fine. And you would be broadly correct in that, but the problem is that all of these relatively small costs start to add up, especially in a city builder where one unoptimized model might get rendered a few hundred times in a single frame. Rasterizing tens of thousands of polygons per instance per frame and literally not affecting a single pixel is just wasteful, whether or not the hardware can handle it. The issues are luckily quite easy to fix, both by creating more LOD variants and by improving the culling system. It will take some time though, and it remains to be seen if CO and Paradox want to invest that time, especially if it involves going through most of the game’s assets and fixing them one by one.”

I.e.: the game would have looked nearly complete even if none of these meshes were in use. Meanwhile the buildings themselves are optimized.



Agreed.

This really smacks of late asset delivery, which probably happened because delivery dates to the rest of the dev team kept being bumped.

Then, by the time the assets were finally delivered, it was recognized they weren't optimized (as expected), but there wasn't any time to fix that.

Or the same thing with an internal asset team.

Although honestly, you'd think after seeing the performance numbers they would have implemented an "above X zoom level, people / anything smaller than this are invisible" sledgehammer fix, until LOD could be properly addressed.

Better to deal with pop-in than have everything be unexpectedly slow.
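
In Unity terms, that sledgehammer could be as small as a component that toggles renderers off past a camera distance. A rough sketch of the idea (the component name, fields, and threshold below are made up, not anything from the game):

    // Hypothetical stop-gap: hide small props entirely beyond a distance
    // threshold until proper LODs exist. Accepts pop-in in exchange for not
    // rasterizing detailed meshes that end up covering zero pixels.
    using UnityEngine;

    public class DistanceCullSmallProps : MonoBehaviour
    {
        public Renderer[] smallPropRenderers; // teeth, log piles, etc.
        public float cullDistance = 200f;     // arbitrary placeholder value

        void LateUpdate()
        {
            var cam = Camera.main;
            if (cam == null) return;

            float sqrCull = cullDistance * cullDistance;
            foreach (var r in smallPropRenderers)
            {
                float sqrDist = (r.transform.position - cam.transform.position).sqrMagnitude;
                r.enabled = sqrDist < sqrCull;
            }
        }
    }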



You're right that this kind of stuff is sort of par for the course. As in other cases, it's indicative of (IMO) a bad development process that they didn't budget the time to polish before shipping. I save my games budget for stuff that is "done when it's done", not rushed out, mostly out of principle.

If you aggressively min-max development cost & time vs features, there are big external costs in terms of waste (poorly performing software carries an energy and hardware cost), end-user frustration, stress on workers, etc., which is how I justify voting with my money against such things.



I get that you can leave a bunch of things unoptimized, as long as it works fine.

What I don't understand is - how did they not notice that the performance was horrible even on high-end hardware? How did they not decide to take the time to investigate the performance issues and find the causes we're talking about now?



So you know the saying “premature optimization is the root of all evil?” Producers love that statement because it removes half of the complaints around work being rushed.

Optimization is not done throughout the process and later there’s not enough time. Assets are made with bad topology and it would take time to redo them. Or it would take time to write a tool that retopologizes them automatically.

What I'm saying is by the time it's "time" to optimize, there's not enough time to optimize. It happens very commonly. But the alternative is taking development slower to do things right. And you simply don't get investment for schedules like that in most companies. Not to mention that it's goddamn hard to do when the execs lay off people, ask them to RTO, and induce serious attrition otherwise. Sometimes the team just can't settle into a good process as people leave it too much. So you're between a rock and a hard place - on the one hand: attrition and low morale, on the other hand: a tight schedule. This doesn't apply to Colossal Order to my knowledge, but it does apply to many AAAs.

There is a problem at the root of this - extremely over-ambitious production schedules as norm. Most other things are symptoms. Most of what I described is a symptom.



Except there really isn't a better product to back up these issues. There were some important improvements made to the gameplay and traffic system, certain things were reworked, nicer graphics, etc. It feels like an iteration and not a ground-breaking game that would justify the performance issues we're seeing.


The design was iterated, but the game assets are redone almost completely, and the systems appear largely reworked, too. This is evident when you play the game.

The scope of work done for this game was exceptionally large for a company with 40 employees, assuming it was done within the usual AAA timeframe.



I _guarantee_ they knew about it.

They even posted on social media 1 week before launch warning people to expect lower than expected performance, and raised the system requirements.

If companies have to decide between prioritising features that they've advertised, show stopper bugs, and performance, guess which one always takes the back seat :)



> Because it is one of the 1,000,000 things to pay attention to in game development.

This is a cop-out. This doesn't seem like an oversight but rather blatant incompetence. You don't just "not pay attention" to this.



you call it incompetence, devs call it "publishers told us to ship now". You'd think a technical community like this would sympathize with such mandates given across the industry.


Well, because when you’re modeling a pile of logs, each additional log you add doubles the number of vertices.

This is a well known property of log scaling.



I’m too dumb to know if you’re making a math joke or if this is a real 3D modeling thing.

Or both?



It’s a joke


It could have been like a hundred vertices and a clever normal map. Just insane.


The studio that made this has like 30 devs.


This doesn't fly with a one man team and not with a 1000. It's just badly done, there's no sugarcoating. Those meshes should never end up in the game files.


>This doesn't fly with a one man team and not with a 1000

Sounds like someone never worked on a 1000-dev team. Random quirks either go unnoticed or are de-prioritized all the time. Most are minor, but more and more moderate-to-major ones are getting through. That's definitely a publisher issue.



Thank you! This is what I was getting at in another comment. This isn't just a case of "oh no, I forgot to switch a button".


I'm solo-developing a game, and there's no fucking way a 100,000-vert mesh is getting in the game without a LOD. My game is running on the Quest 2 at a stable 72 fps.


Sure, it's easy to catch major hiccups when one person has 100% of the knowledge base. Not so much when 30 devs each have different responsibilities and the optimizer guy is drowning in other fires.


Hey, ya gotta have logs /s


I loved every bit of this post, especially the final few sentences. Thank you.


Making an AAA game sounds horrible. And making one in Unity sounds doubly so. All of these things sound like fixable issues. They'll likely be fixed. Hopefully the developers made these oversights because they focused on what makes the game worth fixing to begin with: that the gameplay is fun. In many recent games I feel developers have focused on the wrong things and totally forgot the core, meaning that even if they fix the bugs, the game is still quite hollow (cough, Starfield). A game like this should of course have a continuous perf process, and if it doesn't run OK (min 30 fps on median hardware, 60 on enthusiast hardware, for example) then it just shouldn't ship. I wish more studios would stop having crunch time for meaningless deadlines such as holiday seasons. Someone has said "it's ok to be just 10fps on beefy hardware, we can fix that later, let's ship it now".


> then it just shouldn't ship

Part of the problem is that AAA is just (IMO) too big and expensive. Devs might actually have to ship a broken game around holiday time just to get enough sales to survive.

And the other extreme can be dangerous too, like how Mass Effect Andromeda's development dragged on forever, and EA let it happen because it's such a golden IP.

I think the ultimate solution is to just scale down most studios a little bit, so the studio and publisher can afford to delay. Medium sized studios are the sweetspot, especially going forward with GenAI.



It's not like this is unique to AAA. Unless gamedev is your hobby rather than your full-time job, you simply have no choice other than to ship the game and try to fix it later.

Any small indie studio has to deal with it.



If a studio downsizes, they might be able to survive making lower budget games. And while it's true that if every studio does this, the industry would get better, there's incentive for _one_ studio to pump up their funding and make a super-high-fidelity game, and grab all marketshare.

This makes it a 'race to the bottom' style (or a race to the top?) competition, where higher funding gets you more marketshare, but only against lower funded studios. It's akin to advertising budgets. Mostly a zero sum game in the end.



> super-high-fidelity game, and grab all marketshare

This is not necessarily true any more. I think smaller studios chipping away for a long time are making better games than most big ones.



EA were the culprit for those delays by forcing BioWare to use Frostbite. Blame rests solidly with some synergies-and-savings suit superman.


>Someone has said "it's ok to be just 10fps on beefy hardware, we can fix that later, let's ship it now".

Well, yes, because it doesn't matter. It is on the top seller list on Steam. I agree with you, but we can discuss fixes till our fingers bleed. In the end, the problem is capitalism.

https://store.steampowered.com/search/?supportedlang=english...



I've learnt that (initial) success of a sequel is 95% because of its prequel. True performance may be seen if it has another sequel, or DLC, or heck, another 3 months.


That's the benefit of continually updated games. By the time CS3 is ready, people won't remember CS2 as it was in 2023, but as whatever 3-6 years of updates does to CS2. For a modern example: who's still complaining about Cyberpunk in 2023?

It's not like Sonic 2006 that is forever broken, sold decently at launch and then cratered the series for the next decade to come.



There's still a risk of the game cratering completely like Imperator, although that wasn't a sequel to an established title. Sometimes I still worry about what Victoria 3's fate will be.


That's still not a sufficient condition for success. Not least, the studio's brand will be hurt longer term.


> In the end, the problem is capitalism.

Is this sarcasm? I’m asking seriously. If not, then how is a poorly running game the result of capitalism, and what is the alternative economic model that would produce only high-performance / efficient games?



I don't think it's sarcastic. The game runs poorly because it needed more time for optimizations. It doesn't get more time for optimizations because the publisher said it needed to ship now. The publisher can say that it needs to ship because they can advertise for good launch sales, because a large portion of the customer base will buy it at launch as long as there aren't obvious showstoppers.

It's a bit trite to sum that down to "capitalism", but sure, it's the underlying societal buzzword issue.



> In the end, the problem is capitalism.

surely, it is the underlying economic system ruining the art of modern video game development!



The problem is European capitalism, as the game developer here is Finnish and publisher is Swedish. Important clarification since you seem very invested in calling out the US in every other post of yours, so I'm sure you appreciate the calling out of systemic European failures here.


The Soviet Union produced some of the best video games!!


I spent 40 minutes trying to eke out more than a handful of fps on an empty map with the resolution set at 1080p with Proton Experimental. I gave up and got a refund, I'll try again if they fix the awful performance.

I got a tremendous amount of enjoyment out of the first instalment of the game, it's a big bummer that I can't give this one a go



It's pretty playable on GeForce Now, for what it's worth. Still a bit laggy, but I was able to play for many hours without major issues... just the occasionally annoying but livable stutter.


GeForce Now has been amazing for me as a Mac-only user


Same.

I have a M2 Max and GFN is much much easier than trying to set something up with GPT (Game Porting Toolkit) and Whisky, and much faster & quieter too. An RTX 4080 running in their data center means no local heat and noise.



Yes because you have no other options.


There's lots of options? GPT, WINE Crossover, Luna, Boosteroid, Shadow.tech... none of them run as well as GeForce Now, though. Or a dedicated gaming PC.


Doesn't really count, as it is not rendering on your machine... of course it's good there.


So? That's even better. Doesn't use my battery life or create noise & heat. Netflix isn't run on my machine either.


Sure, but then it does not have any relevance to the article.


It wasn't claimed to be relevant to the article; it was a reply to "it's a big bummer that I can't give this one a go" with a suggestion of how they could play it.


Wasn't supposed to challenge anything in the article. Just an option for those who are struggling to play it on their current hardware.

Even with GFN it's laggy and stutters. Totally agree with the article.



It just uses natural resources to outfit and power data center stuff to create heat and noise somewhere further away. Netflix... is fairly efficient, though being on-demand, perhaps much less so than broadcast TV.


Yes, but better that, in a shared environment with center-wide cooling and such, than each individual household needing to do that on their own.

Also way fewer cards needed this way, with users being able to share cards through the day instead of each needing their own.

Basically mainframes all over again :)



> Yes, but better that, in a shared environment with center-wide cooling and such, than each individual household needing to do that on their own.

I don't think this actually tracks, unless the heat is actually being put to use. You don't need HVACs when you have the machines distributed.



The fix fits into a tweet -> https://twitter.com/ColossalOrder/status/1716883884724322795

> If you're having issues with performance, we recommend you reduce screen resolution to 1080p, disable Depth of Field and Volumetrics, and reduce Global Illumination while we work on solving the issues affecting performance.

This is all I had to do to get smooth performance on an AMD Radeon RX 5700 XT



Try not fluoridating the water, defunding the dentistry college, and subsidizing sugar, so everyone's teeth fall out. Runs much faster then!

[Chill: it's a tooth joke, not a conspiracy theory.]



Water is not fluoridated in most of the world, that’s one of many weird things in the US


DOTS is the brainchild of Mike Acton. See his 2014 CppCon talk "Data-Oriented Design and C++" [1]. But Mike has left Unity, according to his Twitter.

[1] https://www.youtube.com/watch?v=rX0ItVEVjHc



Despite the original post talking about DOTS' rough edges, I didn't see anything in that article that actually suggested DOTS was the cause: that would cause CPU overhead, but it seems like they simply have a bunch of massively over-detailed geometry and never implemented any LOD system.

Maybe they could have gotten away with this with UE5's Nanite, but that much excessive geometry would have brought everything else to its knees.



The author's point is that poor support for DOTS meant the devs had to roll their own culling implementation which they screwed up.


> Maybe they could have gotten away with this with UE5's Nanite

Exactly.

If Unity actually delivered a workable graphics pipeline (for the DOTS/ECS stack, or at all kept up with what UE seems to be doing), these things probably wouldn't be an issue.



The main issue is that DOTS and the whole HDRP/URP effort started at about the same time, but the goals were completely different. So it would have been nearly impossible to get them working together while DOTS was a moving target. Devs already had multiple breaking updates from the alpha versions of DOTS; an entire GPU pipeline sure wasn't going to rely on that.

>Unity has a package called Entities Graphics

Well, that's news to me. Which means that package probably isn't much older than a year - definitely way too late for a game that far into production to use.

Oh, so they rebranded the hybrid renderer. That makes a lot more sense: https://forum.unity.com/threads/hybrid-rendering-package-bec...

I'm fairly certain the hybrid renderer was not ready for production.



DOTS/ECS has nothing to do with geometry LODs. Those are purely optimizing CPU systems.

Even if DOTS was perfect, the GPU would still be entirely geometry throughput bottlenecked.

Yes, UE5 has a large competitive advantage today for high-geometry content. But that wasn’t something Unity claimed could be automatically solved (so Unity is in the same position as every other engine in existence apart from UE5).

The developer should have been aware from the beginning of the need for geometry LOD: it is a city building game! The entire point is to position the camera far away from a massive number of objects!



To quote from the blog post:

> Unity has a package called Entities Graphics, but surprisingly Cities: Skylines 2 doesn’t seem to use that. The reason might be its relative immaturity and its limited set of supported rendering features

I'd hazard a guess their implementation of whatever bridge between ECS and rendering is not capable of LODs currently (for whatever reason). I doubt they simply forgot to slap on the standard Unity component for LODs during development, there's got to be a bigger roadblock here

Edit: The non-presence of LOD'ed models in the build does not necessarily mean they don't exist. Unity builds will usually not include assets that aren't referenced, so they may well exist, just waiting to be used.



Oh sure. I did not mean to imply that. Sorry.


The same Mike Acton who claimed that 30 fps games sold better than 60 fps games using a very questionable data set (excluding sports games etc).


Just wanna give a shout out to the brief mention in the article of Anno 1800. Quite possibly one of the best games I've ever played.


I hope these issues come from the game being rushed and not from a lack of rendering expertise.

Luckily it seems like there are pretty simple reasons for the poor performance so I'm hopeful they can at least do something even if they don't have a ton of rendering expertise.



I think the guess in the article is pretty close to the truth; I've seen stuff like that happen countless times. You make a bet on an early technology (Unity DOTS + ECS in this case) which gives you a lot of benefits, but it's also immature enough that you get a bunch of additional work to do, and you barely have time to get everything in place before the publisher forces you to follow the initial deadline.


100,000 vertices for pile of logs isn't really a bad bet on tech, though. That is just piling vastly more onto any tech stack than it can handle, with nobody having the time or the political okay to do a perf pass through the code and put all these ideas on a diet.

But that means that everything is solvable. There's no need in this game for 100,000 vertices for a logpile, so that should be a relatively straightforward task to fix. And someone can rip out all the teeth and put "Principal Tooth Extraction Engineer" on their resume.



> 100,000 vertices for pile of logs isn't really a bad bet on tech, though. That is just piling vastly more onto any tech stack than it can handle, with nobody having the time or the political okay to do a perf pass through the code and put all these ideas on a diet.

I can easily see this happening though.

Artist starts making assets, asks "What's my budget for each model?" and engineering/managers reply with "Do whatever you want, we'll automatically create different LODs later". Then the day the gold master is being cut, the LOD system still isn't in place, so the call gets made to just ship what they have; otherwise the publisher deadline will be missed.



That sounds like exactly what happened. I've been in that position many times in games I've worked on and seen it happen.


Anyone in management or engineering who tells an artist they have no texture or mesh budget at all gets exactly what's coming to them.


That's essentially what Epic is trying to tell devs these days. Don't know if it will ever truly live up to such lofty goals, though.


This is my favorite video using the poor rendering in the game to make cool art effects. Full disclosure: there may be some artistic license and post-processing, but it's hilarious you can get these visuals in a modern game.

https://www.youtube.com/watch?v=FQ13OFmRF-4



Wrong video? I think that one is using Mirror's Edge Catalyst


They are referring to the video's focus on low-detail game assets that are only rendered in the distance. Though apparently something the game in TFA sorely lacks.


That right there is the city from the original Mirror's Edge (2008). I can tell from having played it a lot. And what a great idea for a music video.


i'm sorry was he benchmarking this at 5120x1440? like seriously do people really game at that resolution?


To me, that seems like an absolutely amazing resolution for many games. Very rarely do you need more pixels in the vertical direction (most games will just end up rendering sky or ground) when left to right are all you really need.

That resolution is still fewer pixels than a simple 4k display, which modern games seem to drive at 60fps quite regularly if you buy beefy hardware. A graphical gore fest like Doom runs at over 100fps on 4k on the card the author has, so a mostly static city builder should also operate fine at a resolution like that.
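
For reference, the raw pixel math backs this up:

    5120 × 1440 = 7,372,800 pixels (super-ultrawide)
    3840 × 2160 = 8,294,400 pixels (4K)

so the author's resolution is roughly 11% fewer pixels than a standard 4K display.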



Yeah, it's referred to as super-ultrawide. There are a reasonable number of monitors out there with this resolution.

It's more for race/flight sim type stuff IMO but if you got it, why not play everything at that res provided it performs well.



That's two 1440p 16:9 monitors next to each other


He is explicitly not benchmarking, just exploring the render pipeline.


That's the best resolution, it's the one of 49" 32:9 ultrawide monitors like the Samsung Odyssey Neo G9.

Look at images of it from the top, that'll help you understand how immersive it is.



Is there any Paradox game that doesn’t have lots of obvious bugs and terrible UI at release? And only the former gets somewhat addressed over time.

I really wonder how they develop at that place. And what kind of QA they have. I think even applying a crude Pareto would improve their games a lot.

Edit: I stand corrected. I wasn't aware that Paradox is also a publisher and even such a big company (over 600 employees!). Still makes you wonder how they go about their business.



I mean unrelated since Paradox is the publisher, but Gauntlet ran like greased lightning, Helldivers ran like greased lightning, Magicka 2 ran like greased lightning…

…of course those were all built on Dragonfly/Bitsquid instead of Unity so that might be a clue about where the issue lies.



>Still makes you wonder how they go about their business

In a way that sent them straight to the top sellers list on Steam. Sadly, today, it just doesn't matter.

Edit: spelling



Paradox just doesn't do testing. The CEO even admitted it.


> Is there any Paradox game that doesn’t have lots of obvious bugs and terrible UI at release?

It's not a Paradox game, they're the publisher. Colossal Order is the developer.

It's a small developer out of Finland, 30-50 employees.



It has or will have twelve dozen DLCs or more, so it's a Paradox game.


Svea Rike II was pretty bug free, but it was released quite some time ago...


Most of them.


Unity has been stalling on its DOTS and network stack re-implementation for like 5 years now.

There is no excuse other than leadership are cashing the checks and squeezing the juice out of the company until they close it, which would make sense looking at their semi-recent merger and poor behavior by the CEO.

Seriously, I was looking into Unity at the start of Covid while laid off, and DOTS was "around the corner" even THAT far back!

They still don't have an answer for a network stack, and now LOD is broken? LMAO.

Unity has been a dirty word for me for a number of years. This is the pay-off for dismissing people's concerns and insisting it will buff out eventually.



When reading sections of the article about Unity's permanently experimental features, I was wondering why they didn't use a different engine (probably because their expertise is in Unity). Does Unreal for example have support for this kind of game?

Oh, and I have to mention the cascaded shadow mapping: "taking about 40 milliseconds or almost half of total frametime" - 40ms is 25fps all by itself!



Unless you've worked in games, it's really hard to understand just how massively tied to the engine a game is.

Imagine you have a web app written in Ruby using Rails with data stored in Postgres. You have a few hundred tables, millions of rows. Millions of lines of Ruby, CSS, and HTML. Thousands of images carefully exported as JPEG, GIF, or PNG depending on what they contain.

Now imagine being told you need to simultaneously port that to:

- Not run on the web. Instead be a native application using the OS's rendering.

- Not be written in Ruby. Instead using a different language.

- Not use Rails. Instead, use a different application framework.

- Not use Postgres. In fact, don't use a relational database at all. Use an entirely different paradigm for storing data.

- Not use CSS, HTML, and raster images. Instead, everything needs to be, I don't know, vector images and Processing code.

That's about the scale of what moving from one game engine to another can be like. Depending on the game, it isn't always that Herculean a task, but it often is. A mature game engine like Unity is a programming language, build system, database, asset pipeline, application framework, IDE, debugger, and set of core libraries all rolled into one. Once you've picked one, moving to another may as well just be creating a whole new game from scratch.



Unity's killer feature is its C# support. It has enabled small teams to do far more in less time than they could ever hope to do using C++, while avoiding some of the nastier types of bugs that C++ game developers have to deal with.

And if you've got a team of experienced Unity developers, some of whom have been working with C#/Unity for their entire game dev career, switching to C++/Unreal isn't the most practical option. While many concepts are similar between engines, you're going to be back at the bottom of the learning cliff for a while.

It sounds like Unity isn't really the problem in this case, it's more about too many polygons, poor use of LOD, and sub-optimal batching (too many draw calls). It was probably more of a time pressure issue than a tech issue, as game development is usually a race against the clock.



Unreal absolutely does have support for this type of game. It is really a smattering of different loosely coupled tools that you can bring into the project, or roll your own. The star is absolutely the rendering engine and visual programming tools IMO.

Unreal really excels at action games, but you can absolutely implement custom camera RTS controls and such. Batteries included but replaceable.

The question "does X engine support this kind of game" is a bit.. off. You would be amazed at how many features these engines pack in. However: You would also be amazed at how NOT "plug and play" they are. It still takes a TON of effort and custom code to make a sophisticated game.



It sounds like these issues are relatively fixable. It's a classic victim of the Unity engine's tech debt though. I use Unity myself and they desperately need to decide on how they want people to make video games in their engine. They can't have three rendering pipelines and two ways of adding game logic that have a complicated matrix of interactions and missing features. And not great documentation and a bad bug reporting process.


It makes one wonder what their internal employee incentives are and if they're problematic.

Microsoft has a similar problem where nobody gets promoted from fixing bugs or maintaining stuff, everyone gets rewarded for new innovative [thing] so every two-three years there's a completely new UI framework or similar.

Although I feel like wanting to start anew is a common tech problem: there are problems and everyone wants to just reboot to "fix" them rather than fixing them head-on, including the backwards-compatibility headaches.



> Microsoft has a similar problem where nobody gets promoted from fixing bugs or maintaining stuff, everyone gets rewarded for new innovative [thing] so every two-three years there's a completely new UI framework or similar.

Is there any big (or even medium-sized) company where this isn't true? I feel like it's just a rule of corporate culture that flashy overpromising projects get you promoted and regularly doing important but mundane and hard-to-measure things gets you PIP'd.



It's a matter of letting things degrade so that the maintenance becomes outright firefighting. I am currently working on a project where a processing pipeline has a maximum practical throughput of 1x, and a median day's load for said pipeline is... 0.95x. So any outage becomes unrecoverable. Getting that project approved six months earlier would have been basically impossible. Right now, it's valued at a promotion-level difficulty instead.

At another job, at a financial firm, I got a big bonus after I went live on November 28th with an upgrade that let a system 10x their max throughput and scale linearly, instead of being completely stuck at their 1x. Median number of requests per second received on Dec 1st? 1.8x... the system would have failed under load, causing significant losses to the company.

Prevention is underrated, but firefighting heroics are so well regarded that sometimes it might even be worthwhile to be the arsonist



Intuitively, "fixing life-or-death disasters is more visible and gets better rewards than preventing them" doesn't seem like it should be a unique problem of software engineering. Any engineering or technical discipline, executed as part of a large company, ought to have the potential for this particular dysfunction.

So I wonder: do the same dynamics appear in any non-software companies? If not, why not? If yes, have they already found a way to solve them?



Outside of software, people designing technology are engineers. Although by no means perfect, engineers generally have more ability to push back against bad technical decisions.

Engineers are also generally encultured into a professional culture that emphasizes disciplined engineering practices and technical excellence. On the other hand, modern software development culture actively discourages these traits. For example, taking the time to do design is labeled as "waterfall", YAGNI sentiment, opposition to algorithms interviews, opposition to "complicated" functional programming techniques, etc.



That's a very idealistic black-and-white view of the world.

A huge number of roles casually use the "engineer" moniker and a lot of people who actually have engineering degrees of some sort, even advanced degrees from top schools, are not licensed and don't necessarily follow rigid processes (e.g. structural analyses) on a day to day basis.

As someone who does have engineering degrees outside of software, I have zero problem with the software engineer term--at least for anyone who does have some education in basic principles and practices.



I have yet to see, with the exception of the software world, engineering with such loose process.


As someone who was a mechanical engineer in the oil business, I think you have a very positive view of engineering processes in general.


> If yes, have they already found a way to solve them?

A long history of blood, lawsuits, and regulations.

Preventing a building from collapsing is done ahead of time, because buildings have previously collapsed, and cost a lot of lives / money etc.



I remember my very first day of studying engineering, the professor said: "Do you know the difference between an engineer and a doctor? When a doctor messes up, people die. When an engineer messes up LOTS of people die."


How do you think we got into this climate change mess?


Yeah, but if you had a release target of Dec 15 and it crashed Dec 1st and you could have brought it home by the 7th, you would have been a bigger winner. Tragedy prevented is tragedy forgotten. No lessons were learned.


I spent a few weeks migrating and then fixing a bunch of bugs in 20-year old Perl codebase (cyber security had their sights set on it). Basically used by a huge amount of people to record data for all kinds of processes at work.

Original developer is long gone. Me and another guy are two of the only people (we aren't a tech company) who can re-learn Perl, upgrade multiple versions of Linux/Apache/MySQL, make everything else work like Kerberos etc...

Or maybe I'm one of the only people dumb enough to take it on.

Either way, nobody will get so much as an attaboy at the next department meeting. But, they'll know who to go to the next time some other project is resurrected from the depths of hell and needs to be brought up to date.



Facebook was pretty good about this on the infra teams. No, not perfect, but a lot better than the other big companies I was exposed to.

If anything, big companies are better about tech-debt squashing, and it's the little tiny companies and startups that are, on average, spending less time on it.



I think it is a bit tricky to get the incentives right (since the bookkeeping people like to quantify everything). If you reward finding and fixing bugs too much, you might push developers to write sloppier code in the first place, because then those who loudly fix their own mess get promoted, and those who quietly write solid code get overlooked.


Goodhart’s law at work, or “why you shouldn’t force information workers to chase after arbitrary metrics”. Basecamp has been famously just letting people do good work, on their terms, without KPIs.

I will preemptively agree that this isn’t possible everywhere; but if you create a good work environment where people don’t feel like puppets executing the PM’s vision, they might actually care and want to do a solid day’s work (which we’re wired for).



Is it only big companies? The fact that many companies in our industry need to do "bug squash" events because we are unable to prioritize bugs properly speaks volumes to me.


Top down decision making, typically by non-technical people who often have no idea what software development even involves.

Eventually things get so bad that there's no choice but to abandon feature work to fix them.

The business loses out multiple times. Feature work slows down as developers are forced to waste time finding workarounds for debt and bugs. The improvements/fixes take more time than they would have due to layers of crap being piled on top, and the event that forces a clean up generally has financial or reputational consequence.

Collaborative decision making is the only way around this. Most engineers understand that improvements must be balanced with feature work.

I find it very strange that the industry operates in the way it does. Where the people with the most knowledge of the requirements and repercussions are so often stripped of any decision making power.



This is pretty much a universal thing--whether it's software development or home maintenance. It's really tempting to kick the can down the road to the point where 1.) You HAVE to do something; 2.) It's not your problem any longer; or 3.) Something happens that the can doesn't matter any more.

I won't say procrastination is a virtue. But sometimes the deferred task really does cease to matter.



At least we wouldn't do that on a planetary scale, right?


> Is there any big (or even medium-sized) company where this isn't true?

Valve?



From everything I've read Valve has exactly the same problem. Stack rating isn't immune. New features still get rewarded the most.


Aviation. Software will often spend ten times as long in QA and testing as it will in principal development.


It seems endemic, especially everywhere that's not a product company. I think it was The Mythical Man-Month (maybe earlier) that pointed out that 90% of the cost of software is in maintenance, yet 50 years on this cost isn't accounted for in project planning.

Consultancies are by far the worst, a project is done and everyone moves on, yet the clients still expect quick fixes and the occasional added feature but there's no one familiar with the code base.

Developers don't help either, a lot move from green field to green field like locusts and never learn the lessons of maintaining something, so they make the same mistakes over and over again.



https://www.thepeoplespace.com/practice/articles/leadership-...

It’s very rare, this is one of the only places I can imagine something like that happening.



The developer of Cities: Skylines has fewer than 50 employees in total; it's a small studio based in Finland (Colossal Order). I doubt they have those sorts of issues at that scale; that's usually something that happens at medium/large companies.

Edit: seems I misunderstood, ignore me



Talking about Unity, not Colossal Order.


Unity, not cities skylines.


It's a combination of one team not being given enough time and headcount to maintain and develop a product, and another team's manager wanting to grab a fief.

So old products are thrown away while new products with similar functionalities are being created.

Both teams are happy. The users suffer.



We're weeks past a very public pricing change that cost Unity market reach amidst competitors and open source projects; and that led to a CEO change. There are problems beyond what the employees can realistically fix.


I worked at Unity on Build Automation/Cloud Build for nearly a decade. Let me assure you, that tech debt is NOT being fixed any year soon. It’s due to a fundamental disconnect between executive leadership wanting to run the company like Adobe (explicitly) and every engineer wanting to work like a large scale open source project (Kubernetes, Linux, and Apache are pretty close in style). The only way anything gets built is as a Skunkworks project and you can only do so much without funding and executive support.


> run the company like Adobe (explicitly)

what does this mean?



Can you elaborate on this?


Sounds like every other enterprise software platform. Unity has reached the IBM level of "no one gets fired for choosing X," even though X only makes the business people happy.


Honestly, automatic LOD generation would solve at least some of the performance issues: add the functionality, make it opt-out for those that don't need LODs and enjoy performance improvements in most projects, in addition to some folks getting a simpler workflow (e.g. using auto-generated models instead of having to create your own, which could at the very least have passable quality).

Godot has this: https://docs.godotengine.org/en/stable/tutorials/3d/mesh_lod...

Unreal has this (for static meshes): https://docs.unrealengine.com/5.3/en-US/static-mesh-automati...

Aside from that, agreed: the multiple render pipelines, the multiple UI solutions, the multiple types of programming (ECS vs GameObject) all feel very confusing, especially since the differences between them are pretty major.



I'm pretty sure unity already has that.


Out of the box, it only has manual LOD support for meshes: https://docs.unity3d.com/Manual/importing-lod-meshes.html (where you create the models yourself)

They played around with the idea of automatic LOD, but the repo they had hasn't gotten updated in a while: https://github.com/Unity-Technologies/AutoLOD

The closest to that would be looking at assets on the Asset Store, for example: https://assetstore.unity.com/packages/tools/utilities/poly-f...

An exception to that is something like the terrain, which generates the model on the fly and decreases detail for further away chunks as necessary, but that's pretty much the same with the other engines (except for Godot, which doesn't have a terrain solution built in, but the terrain plugins do have that functionality). I guess in Unity's case you can still get that functionality with bought assets, which won't be an issue for most studios (provided that the assets get updated and aren't a liability in that way), but might be for someone who just wants that functionality for free.



Ok that's pretty surprising. I didn't know it was still this bad


It doesn’t which is really annoying.


It sounds like they need to implement easy-to-use Level of Detail (LOD) and progressive meshes. 100,000 vertices on far-away objects will break most rendering pipelines that do not somehow reduce them: that's 100,000 complicated matrix interactions instead of the roughly 8 it probably takes when really far away.

[1] https://en.wikipedia.org/wiki/Level_of_detail_(computer_grap...

[2] https://en.wikipedia.org/wiki/Progressive_meshes
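
To make that concrete, here's a minimal, hypothetical sketch of distance-based LOD selection. It's not the game's code, and the thresholds and vertex counts are made up:

    // Hypothetical LOD chain: pick a mesh variant by camera distance so that
    // far-away objects use a handful of triangles instead of the full mesh.
    struct LodLevel {
        max_distance: f32, // use this level while the object is closer than this
        vertex_count: u32, // illustrative numbers only
    }

    fn select_lod(levels: &[LodLevel], distance_to_camera: f32) -> usize {
        levels
            .iter()
            .position(|lod| distance_to_camera <= lod.max_distance)
            .unwrap_or(levels.len() - 1) // past every threshold: coarsest level
    }

    fn main() {
        let levels = [
            LodLevel { max_distance: 50.0, vertex_count: 100_000 },
            LodLevel { max_distance: 200.0, vertex_count: 5_000 },
            LodLevel { max_distance: f32::MAX, vertex_count: 8 },
        ];
        for d in [10.0, 120.0, 500.0] {
            let i = select_lod(&levels, d);
            println!("distance {d}: LOD {i} ({} vertices)", levels[i].vertex_count);
        }
    }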



It's honestly a bit insane.

Just the other night I wanted to know what it'd take to do some AR development for the Quest 3 using Unity. 10 minutes in I was straight up confused. There's AR Foundation, AR Core, AR Kit, and I think at least one other thing. I have no idea the difference between those, if they're even wholly separate. That's on top of using either the OpenXR or Unity plugin for the actual headset.



AR Kit is Apple's thing. AR Core is Google's thing. Neither of those are Unity's fault. AR Foundation is a Unity layer to present a common interface, which in my books is a good thing.

OpenXR is also an attempt to make a cross-platform layer for vendor-specific APIs. Again not Unity's fault. The Unity plugin system is a common interface for all XR devices.

I'd generally support your sentiment but in this case you're picking on things where Unity had mostly got it right.



This comment made things more clear to me than the documentation that I saw, perhaps because I was looking at it from a Quest 3 perspective and their main page about it doesn't mention that at all.

I know what Open XR is all about but again, it's not clear which you should actually use for development if you're only targeting Quest devices, for instance. A little extra documentation would go a long way.

The same goes for all of the other things so frequently mentioned like their renderers.



Honestly, like a lot of Unity functionality, what you do there is pay $50-100 for an asset from the store that handles things in a sane way.


I think it's a good thing in the long run, one more reason to switch away from Unity to add to the ever growing pile.


It's a classic victim of shitty, shitty software developers who blame tools rather than taking ownership.

Or shitty software dev companies that push out crap to meet marketing deadlines.

Either way, take your money elsewhere.



Given that a number of other Unity-based games have had the same or similar performance issues, including KSP1, the Endless games, and others, it seems the problem is very much that Cities Skylines 2 is hitting up against the performance limits that the Unity engine is capable of without custom modifications to the engine-layer codebase.


I have personally been responsible for optimizing unity games you haven't heard issues like this about ;)

This write-up really points the finger at not solving occlusion culling or having good LOD discipline.

Give a person a dedicated optimization mandate and you can avoid most of this. One of the first things I do when I'm profiling is to sort assets by tris count and scan for excess. I wonder if they had somebody go through and strategically disable shadowcasting on things like those teeth? I am guessing that they made optimization "everybody's responsibility" but nobody had it as their only responsibility.
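
For what it's worth, the "sort assets by triangle count" pass described above can be as simple as the following sketch; the asset names and numbers are made up, not from the game:

    // Toy example of a triangle-count audit: sort meshes by triangle count,
    // descending, so the heaviest assets surface first for an optimization pass.
    fn main() {
        let mut meshes = vec![
            ("citizen_teeth", 40_000u32),
            ("log_pile", 100_000),
            ("road_segment", 1_200),
            ("lamp_post", 300),
        ];
        meshes.sort_by(|a, b| b.1.cmp(&a.1));
        for (name, tris) in &meshes {
            println!("{name}: {tris} tris");
        }
    }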



Yeah I mean regardless of any of Unity’s limitations, this is entirely upon the developer.

However, I also find it odd to suggest that because there are other high-profile examples of Unity projects with performance issues, it must be a problem with Unity.

You don’t hear that about Unreal Engine, despite the fact that there are poorly optimized UE games.

Such a bizarre set of assumptions.



Occlusion culling and LOD should be handled by the engine, not the game logic, so the write-up really points to the problem being Unity's new and very incomplete rendering pipeline for ECS.


Granted I know next to nothing about game development, but aren't LOD models made by hand?


There are tons of answers to this! I'm going to say that in projects I've worked on, LODs have been hand made about 60% of the time.

There are tools for creating automatic LODs that come with their own pros and cons. A bad LOD chain can express itself as really obvious pop-in while you're playing the game. There are also these things called imposters, which are basically flipbook images of an object from multiple angles that can be used in place of the true 3D geometry at a distance. Those are created automatically. They tend to be like 4 triangles but can eat more VRAM because of the flipbook sizes.

Unreal engine has nanite, which is a really fancy way to side step needing LOD chains with something akin to tessellation, as I understand it. Tech like that is likely the future, but it is not accurate to describe it as the "way most games are made today"
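
To illustrate the imposter idea from above with a hedged sketch (a real system would also handle camera elevation and blend between neighbouring views), the runtime part can boil down to picking one of N pre-rendered flipbook frames from the camera's angle around the object:

    // Quantize the camera's yaw around the object into one of `frame_count`
    // pre-rendered views and return that flipbook frame index.
    fn imposter_frame(yaw_radians: f32, frame_count: u32) -> u32 {
        let tau = std::f32::consts::TAU;
        let normalized = yaw_radians.rem_euclid(tau) / tau; // 0.0..1.0 around the object
        ((normalized * frame_count as f32) as u32) % frame_count
    }

    fn main() {
        // 16 views captured offline, one chosen per frame at runtime.
        assert_eq!(imposter_frame(0.0, 16), 0);
        assert_eq!(imposter_frame(std::f32::consts::PI, 16), 8);
    }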



This is simply not correct. Cities: Skylines 2 even went through the trouble of using DOTS, which is something you cannot take advantage of in Unreal Engine or Godot. To get more optimal than that on the CPU-utilization side, you will be writing your own engine in C++ or Rust.

The fuck up here is whoever was handling the art assets. You simply do not ship a game with such detailed graphics and no LODs. They must've simply been downloading things off the asset store and throwing them in without any regard for performance.



I'll be really surprised if the Cities: Skylines team didn't have access to Unity's source code.


And do they have the number of engineers with the required skills to rewrite half the engine? Especially if the reason why they developed using those tools and engine is they expected not to have to do it themselves in the first place?

It's not like there's just some "go_slow=true" constant that just needs changing.



It's pretty unlikely. A source code license is negotiated with the sales team directly and costs at least USD100k last I heard (the price is not publicly disclosed). They're also reluctant to give source code licenses at all.


It's kind of stunning that a game of this magnitude is able to go out the door without model LOD.

I suppose the fact that it runs at all is stunning -- surely you could not get away with this a decade or two ago -- but perhaps it speaks to the incredible capabilities of modern hardware. This feels a bit similar to the Electron criticism, where convenience ultimately trumps performance, and users ultimately don't care. I wonder how this will play out in the long run.

Bizarre and at least for me, equally sad. I long for the days of a tuned, polished game engine squeezing every inch of performance out of your PC.



Valve/Steam should really have some policies around unfinished or unpolished games so that they are forced to be marked as "early access"

It is absolutely ridiculous that these developers can get away with releasing a beta (essentially what it is) and setting the full release price without the end user knowing they're a guinea pig.



Read the reviews and don't buy it. This works fantastically as a punishment already without ham-handed, opaque moderation.


If this works fantastically, then why is it at the top sellers list?

https://store.steampowered.com/search/?supportedlang=english...



Because the problems are vastly overstated. I've been playing fine since launch without any issues.


Early access is not a "punishment" though. It's a system that already exists on Steam.


But when used in the way you're describing, with Valve forcing it on a developer instead of the developer opting in, it becomes a form of punishment.


Early Access is a "punishment" purely because game devs and publishers use it as an excuse to sell incomplete, broken products.

The terrible reputation is self-inflicted and deserved.



I wonder if releasing a widely anticipated game unfinished is sometimes actually strategically beneficial marketing wise. Perhaps it's a marketing dark pattern?

It makes the game stay in people's minds longer because people keep coming back to it asking "is it good yet, have they fixed it yet?". It kind of feels it has worked like that for Cyberpunk. If it's a finished game on launch day people will quickly make up their minds if it's for them and then move on.

Personally I would be on the fence about buying it even if it was good on launch and I would probably not buy it straight away. But I might just change my mind if I get reminded of it enough times. Then again I felt like that about Cyberpunk as well and I still haven't bought it.



But it’s not “half finished”!

It has some performance issues. Not the same thing.



Sure, I should've picked a better word there.


I think half-finished is a good way to describe the state of day-1 releases of games these days. Look back on other games (and non-game software) and measure A. the amount of time between when the developer started and the first release, and then B. the total amount of time it took to get to the final patch. I bet for many, MANY games, A ≤ B/2: They were literally "half-finished" in terms of time, on first release.


I think Cyberpunk was only able to turn itself around because the studio got so famous with The Witcher and people were willing to give them another chance. If they hadn't been famous already, it'd just have been another rando shitty game on Steam, of which there are thousands...

But then there are stories like No Man's Sky too, which had a miraculous turnaround as well. So maybe it can happen sometimes...



I have a more conspiratorial opinion of Cyberpunk's criticism. If you look at Cities Skylines 2 and many other AAA titles, they are just as bad or worse. Yet the one company that sought to fight the way other developers and publishers behaved got a shitload of criticism for what was, in my opinion, a much smaller problem. CS:2 is much worse, Starfield is way more buggy, etc.

In my opinion, Cyberpunk was punked by the gaming industry to harm CD Projekt Red. I have no proof at all, of course.



The anime Cyberpunk: Edgerunners had a significant impact on getting people to look at the game again.

That's a black swan that can't easily be replicated.



I think their reviews are punishment enough. "Very Positive" for CS:1 and "Mixed" for CS:2. And if it improves over time, the reviews improve with them!

Cyberpunk was a good example of that. And the graphs make it really easy to see how it's changed over time: https://store.steampowered.com/app/1091500/Cyberpunk_2077/#a...



Which only incentivizes companies to publish unfinished products with a “will fix it later” ideology.


Is that a big deal? That's easy to avoid if you don't pre-order games and just wait for the day 1 reviews. Even if you did end up with a shitty situation, Steam lets you refund the games with minimal hassle.

On the other hand, there are players who'd rather have the game earlier (like me) than a few months later, despite its launch issues.

The alternative approach -- Baldur's Gate 3 being in Early Access forever -- is fine too, but damned if that wasn't a long wait.

Maybe the compromise is bigger companies being willing to release in Early Access more often. That shouldn't be limited to just indie companies, but any publisher that wants early and broad public feedback.

Especially for a city-builder game (where there isn't really a campaign or spoilers), I don't see why not...



Okay, for real, who decided this game could have the acronym CS? Counter-Strike has been one of the most played games since 2000. I don't even play Counter-Strike, but it has a huge player base compared to this game. And didn't Counter-Strike 2 literally come out a few weeks ago?


Heh, good point.

Also, I really wish Apple chose some other name for its Game Porting Toolkit... hard to find relevant discussions in the sea of "other" GPT talk.



To make matters worse, when it had just released, the two top games on Steam were CS:2 and CS:2.


The acronyms are not the same, and the article writes them correctly:

CS2 -> Counter Strike 2
C:S2 -> Cities: Skylines 2



They let you return the game, no questions asked, if you've played it for less than two hours.


True. Marking it early access would just save more people's time and be more explicit about the current state of the game.


Valve/steam should absolutely not be doing that


Why not? There's already a hardware survey & they could easily have an opt-in system that reports the user's average framerate while playing games. If the average hardware specs can't run that game >=60fps >=90% of the time on any graphical setting then it's beyond fair to give it a "Hardware reports indicate that this game performs poorly" label.
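
As a purely hypothetical sketch of that check, assuming an opted-in client records per-frame times and only reports an aggregate:

    // Does this session hit the target framerate at least `required_fraction`
    // of the time? (e.g. 60 fps for 90% of frames)
    fn meets_target(frame_times_ms: &[f32], target_fps: f32, required_fraction: f32) -> bool {
        let budget_ms = 1000.0 / target_fps;
        let ok = frame_times_ms.iter().filter(|&&t| t <= budget_ms).count();
        (ok as f32 / frame_times_ms.len() as f32) >= required_fraction
    }

    fn main() {
        // 9 of these 10 frames fit the 16.67 ms budget, so the session passes.
        let frame_times = [14.0, 15.5, 16.0, 33.0, 15.0, 16.2, 15.8, 14.9, 15.1, 16.4];
        println!("passes 60 fps / 90% check: {}", meets_target(&frame_times, 60.0, 0.9));
    }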


It's high time telemetry served users tbh. I would 100% opt-in to telemetry if it would tattle on badly made or misbehaving software.


I like that!


Why not? They're still able to list the game and sell it.

I don't see the issue with making it more clear to end users that they're beta testing a game.



Yeah they need to start moderating quality.


market forces ...

but yea, source2 engine could have used some more love before going live



This old chestnut again.

Software is not “essentially a beta” because it doesn’t meet a bunch of entitled users’ arbitrary definitions of “finished”.

A game having some performance issues doesn’t mean you’re a beta tester.

Did you even buy this game? I suspect not.



No, I didn't buy it because it's $50 and you have to follow guides and tricks to get it running optimally. That's not what I expect from a game released at full price.


You have a lot of opinions about a game you've never played.


Enlighten me.


> I long for the days of a tuned, polished game engine squeezing every inch of performance out of your PC.

Have you heard of Factorio :)



How do they keep finding things to improve in their FridayFunFacts? The game is an absolute gem.


Can someone expand on exactly what is meant by "model LOD" in this context?

Does the commenter mean they should have implemented a system to reduce texture resolution or polygon count dynamically, e.g. depending on what's in view or how far away it is? That the artists should have made multiple versions of assets, with coarse variants removing things like computer cables from desks in buildings?



> Can someone expand on exactly what is meant by "model LOD" in this context?

Back in the day, games just had one version of each model, which either got loaded or not.

Nowadays, games with lots of models and a huge amount of detail let each model have multiple different versions, each with its own LOD (Level of Detail).

So if you see a tree from far away, it might be 20 vertices because you're far away from it so you wouldn't see the details anyways. But if you're right next to it, it might have 20,000 vertices instead.

It's an optimization technique to not send too much geometry to the GPU.
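
Just as an illustrative sketch (an assumption on my part, not how C:S2 actually does it), the switch can also be driven by how large the object appears on screen rather than by raw distance:

    // Estimate the on-screen height of an object under perspective projection;
    // once it covers only a few pixels, the 20-vertex version is plenty.
    fn projected_height_px(object_height: f32, distance: f32,
                           fov_y_radians: f32, screen_height_px: f32) -> f32 {
        let focal = screen_height_px / (2.0 * (fov_y_radians / 2.0).tan());
        (object_height * focal) / distance.max(0.001)
    }

    fn main() {
        // A ~10 m tall tree seen from 800 m on a 1080p screen with a 60° FOV
        // ends up roughly a dozen pixels tall.
        let px = projected_height_px(10.0, 800.0, 60f32.to_radians(), 1080.0);
        println!("~{px:.0} px tall on screen");
    }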



Yes, that. It’s short for “level of detail”.


There were already some great explanations in the replies, but here are a few videos too:

Basic overview: https://www.youtube.com/watch?v=mIkIMgEVnX0

Or a more detailed one: https://www.youtube.com/watch?v=TwaS5YuTTA0

It's been a standard technique in video games for... decades now.



Traditionally, yes, that is something done by modelers for video games.


It's stunning and completely unacceptable. This is a product that is not fit for purpose. I hope the developers are embarrassed by what they have produced.


This is rarely developers' fault. You can bet they wanted to deliver the best product possible, but were not given the time needed to do that by upper management.


I didn't blame the developers. I have been involved in projects that I'm embarrassed by even though the worst decisions were the ones made by upper management (hell I worked on the new Jira front end, a continuing source of humiliation).


Why is it impossible that maybe Cities Skylines simply has shitty developers?


Because if you've worked on any large scale software project, technical issues like this are almost always because of priorities and timeline.


I think the developers are too overworked to feel embarrassment from a random person on the internet criticizing them.


It is astonishing indeed. In the long run I hope software catches up with the majority of other industries, which have already realized that minimizing waste is a good idea. It might take a while, but unless we get room-temperature superconductivity or some other revolutionary tech first, we will start thinking about efficiency again sooner or later.


I don't remember game engines ever being polished and tuned. If you needed optimization it was always on the community to figure it out. Usually they'd do a good job though and make it go up 10 fold compared to what the game devs came up with.


The last one wasn't better either... most games nowadays have game breaking bugs at launch.


Don't forget the part where they use web tech and waste draw calls like crazy on the UI. These things should literally be banned.

edit: not that web tech should be banned, but releasing a game with horrible optimisation like this should be, either by the store selling the game or by law



Where are you getting this from? I'm literally sitting with the game open right now with the Chrome Devtools connected to it, and I'm seeing no unnecessary modifications on the DOM side of things.

Could be that they integrated the Gameface library incorrectly, I guess? Still interested in more details from you.



"Quite a lot of draw calls are used for the Gameface-powered UI elements, though ultimately these calls are very fast compared to the rest of the rendering process."

Literally quoted from the article. Standard 2D UI like that can be done in as little as a single draw call (or so I have read; never actually done it).
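
In principle it works something like this hedged sketch: every UI rectangle is appended to one shared vertex buffer sampling a single texture atlas, and the whole buffer is submitted with one draw call per frame (graphics-API specifics omitted; real UIs also batch text, handle clipping, etc.):

    // Accumulate UI quads into a single vertex buffer; one draw call then
    // renders everything in the buffer.
    #[derive(Clone, Copy)]
    struct Vertex {
        pos: [f32; 2],
        uv: [f32; 2],
    }

    fn push_quad(buf: &mut Vec<Vertex>, x: f32, y: f32, w: f32, h: f32, uv: [f32; 4]) {
        let [u0, v0, u1, v1] = uv;
        buf.extend_from_slice(&[
            Vertex { pos: [x, y], uv: [u0, v0] },
            Vertex { pos: [x + w, y], uv: [u1, v0] },
            Vertex { pos: [x + w, y + h], uv: [u1, v1] },
            Vertex { pos: [x, y], uv: [u0, v0] },
            Vertex { pos: [x + w, y + h], uv: [u1, v1] },
            Vertex { pos: [x, y + h], uv: [u0, v1] },
        ]);
    }

    fn main() {
        let mut ui_vertices: Vec<Vertex> = Vec::new();
        push_quad(&mut ui_vertices, 0.0, 1040.0, 1920.0, 40.0, [0.0, 0.0, 0.5, 0.1]); // bottom bar
        push_quad(&mut ui_vertices, 16.0, 16.0, 64.0, 64.0, [0.5, 0.0, 0.6, 0.1]);    // an icon
        // Upload `ui_vertices` once and issue a single draw call for the lot.
        println!("{} vertices, 1 draw call", ui_vertices.len());
    }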



If you composite on the CPU, I guess? No, that React/Webpack UI is actually a pretty good solution to complex game UIs. It offers great DX, while the performance penalty is minuscule compared to a huge deferred render pipeline. Btw, the last Sim City used the web platform for UI too.


The last Sim City is not something that should be held up as a model of what to do here.

Great "DX" now when you're building it, but good luck maintaining it over the long term.



Directly from the article:

"The last remaining draw calls are used to render all of the different UI elements, both the ones that are drawn into the world as well as the more traditional UI elements like the bottom bar and other controls. Quite a lot of draw calls are used for the Gameface-powered UI elements, though ultimately these calls are very fast compared to the rest of the rendering process. "

With this minimalistic, flat-style, soulless UI, the correct number of draw calls spent on the UI should be in the single digits...



The author calls out render passes that take 100us, and considers this pass too fast to give a number to.

Why does it matter if it's 5 render calls or 500? The developers clearly have plenty of work to do optimizing the other 70ms, it doesn't make sense for them to spend any time working on this.



Not sure what kind of projects you've worked on before, but in the ones I've been involved in, you wouldn't spend time optimizing something that takes such a tiny fraction of the frame time.


Why on earth would they try to optimize how the UI renders when they're having big issues elsewhere?


Sorry, my initial comment probably came off quite differently than intended. It's not that "using web UI" is the reason why this game has awful performance; it's more like a bellwether for the studio's priorities.

It's absolutely not the most important, or even in the top 10 most important problems here, but it shows really illustratively how much they care about making a game which performs in an acceptable way. (which is: not that much)

Also, it's not even one or two high-poly models dragging the performance down; what I was aiming at is that the game suffers from death by a thousand cuts. The LODs are only part of the issue; almost every part of the game is done in a sub-optimal way. So while the UI is not a significant part of the frame time, once they fix the most glaring performance issues they will find there is no silver bullet: the game is just a pile of small performance problems all the way down.



> It's not that "using web UI" is the reason why this game has awful performance, that's more like a bellwether for the studio's priorities.

All it shows is that optimizing something that was already fast enough was not a priority. But why would you want it to be?



Sometimes you can take those small design decisions as indicative of how decisions are made in a company.

I remember back when Apple removed the SD card slot from their phone. It wasn't a deal-breaking change, because their phones had lots of memory built in... but at the time, it seemed to me that the decision makers that did that, would continue to make similar decisions, each one making me unhappier and unhappier. So I bought accordingly. And I feel justified in retrospect.

Way back in the design stages of this game, somebody thought that using web tech for their UI was a good idea, and it didn't get vetoed. That indicates bad judgement somewhere in the chain. Who knows whether it's developers, managers, or what... but the best predictor of bad decisions is previous bad decisions. It's at least a dysfunction smell.



Respectfully, just take the L on your original comment and move on. It’s ok to be wrong.


>"These things should literally be banned."

Not "metaphorically", but "literally"? Or are you using "literally" in its non-literal sense? And "banned", not "discouraged", or simply "ridiculed" like you're trying to do?

That is literally (to use the term in its literal sense) an extremely brash statement, quite a lot to walk back in reverse into the shrubberies. Have you actually tried to develop a UI in Unity that approaches the quality you can easily (and cheaply and quickly and maintainably) implement in a web browser? And have you ever tried to find someone to hire who was qualified to do that (and then put them to work on the UI instead of the game itself), compared to trying to find someone to hire who can whip out a high quality performant web user interface in a snap, that you can also use on your web site?

Not to mention that you used web tech to call for the literal banning of web tech.



Using web tech for the UI isn't a problem here. The article, when measuring the performance impact of different rendering phases, describes the time the UI requires as "an irrelevant amount of time."

