(comments)

Original link: https://news.ycombinator.com/item?id=40286734

Commenters express disappointment with the recent trend of soldering RAM to the board instead of offering socketed DIMM slots for upgrades. They acknowledge the technical reasons behind the decision, including the importance of keeping distances short and precisely matched and of avoiding the problems edge connectors cause in high-frequency electronics, but even granting that profit motives play a role, they question whether the power savings are worth the e-waste generated when a whole computer has to be replaced. The thread discusses Dell's earlier effort to standardize a new memory form factor called CAMM, which became the basis for the current LPDDR5 CAMM2 and DDR5 CAMM2 standards; the recurring difficulty of buying Lenovo spare parts through Digital River, which has led to negative experiences; whether it makes sense to provision computers for only a single task; whether future LPCAMM generations will need wider buses, which might push Apple to adopt the standard despite its earlier resistance; and the growing demand for high-bandwidth memory, particularly for iGPUs and inference. Commenters also suggest selling refurbished or used computers as a way to limit e-waste, and weigh the power-consumption argument against the advantages of tight integration, such as better signal integrity, better thermal flow, smaller packaging, and lower cost. Some conclude that easy upgradability should not be prioritized over these technical advances.


Original text


I’m glad they explained why RAM has become soldered to the board recently. It’s easy to be cynical and assume they were doing it for profit motive purposes (which might be a nice side effect), but it’s good to know that there’s also a technical reason to solder it. Even better to know that it’s been recognized and a solution is being worked on.


I didn't find that a particularly complete explanation - and the slot can't be closer to the CPU because? - I think it must be more about parasitic properties of the card edge connector on DIMMs being problematic at lower voltage (and higher frequencies) or something. Note the solution is a ball grid connection and the whole thing's shielded.

I suppose in fairness and to the explanation it does give, the other thing that footprint allows is a shorter path for the pins that would otherwise be near the ends of the daughter board (e.g. on a DIMM), since they can all go roughly straight across (on multiple layers) instead of a longer diagonal according to how far off centre they are. But even if that's it, that's what I mean by it seeming incomplete. :)



> and the slot can't be closer to the CPU because?

All the traces going into the slot need to be length-matched to obscene precision, and the physical width of the slot and the room required by the "wiggles" made in the middle traces to length-match them restrict how close you can put the slot. Most modern boards are designed to place it as close as possible.

LPCAMM2 fixes this by having a lot of the length-matching done in the connector.
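To get a rough feel for how tight that matching is, here's a back-of-envelope sketch in Python. The numbers are assumptions rather than anything from the article: roughly 170 ps/inch propagation delay on FR-4 and a skew budget of about 5% of a unit interval.

  # Rough feel for why DDR5 traces need tight length matching.
  # Assumed ballpark figures (not from the article): ~170 ps/inch
  # propagation delay on FR-4, skew budget ~5% of a unit interval.
  data_rate_mtps = 5600                      # DDR5-5600, mega-transfers per second
  unit_interval_ps = 1e6 / data_rate_mtps    # time per bit on the wire, in ps
  skew_budget_ps = 0.05 * unit_interval_ps   # assumed allowable skew

  prop_delay_ps_per_mm = 170 / 25.4          # ~170 ps/inch -> ps per mm
  allowed_mismatch_mm = skew_budget_ps / prop_delay_ps_per_mm

  print(f"Unit interval:           {unit_interval_ps:.1f} ps")
  print(f"Skew budget (~5% UI):    {skew_budget_ps:.1f} ps")
  print(f"Allowed length mismatch: ~{allowed_mismatch_mm:.2f} mm")

Even with those generous assumptions the allowed mismatch comes out around a millimetre, which is why the length-matching "wiggles" eat so much board area around a slot.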



Generally speaking, layout for modern DRAM (LPDDRx, etc.) is a giant pain. Trace width, differential trace length matching, spacing, number of vias, and more.

And all this is needed even though the DRAM signaling standard has extensive measurement and analysis of the traces built right into the hardware of the DRAM and the memory controller on the processor. They negotiate the speed and latency at runtime.

Giant pain.



Yeah, the form factor means the furthest RAM chip on a DIMM can only be so close to the CPU, and the other traces need to match that length. Distance is critical and edge connectors sure don't help.


I didn’t really appreciate the insanity of the electrical engineering involved in high frequency stuff till I tried to design some PCBs. A simplistic mental model of wires and interconnects rapidly falls apart as frequencies increase


If they soldered a decent amount, enough that you can be sure you'll never need to upgrade, it would be fine (seriously, 64GB of RAM costs like 100eur, a non-issue in a 1000eur laptop). 8 is already not enough and 16 will soon be limiting too.


Is the goal to not have any computers that are limited to a single task? Tons of corporate IT purchases go to someone only using e.g. Word all day. Do we really care if they are provisioned with “enough” memory for you or me?


The baseline 14" MacBook Pro that costs $1600 has 8GB of shared RAM. That's not enough. I don't believe OP is talking about machines better suited for your task, machines in the $1k range.


They can have their technical fig leaf to hide behind but in practice, how many watts are we really saving between lpddr5 and ddr5? is it worth the ewaste tradeoff to have a laptop we can't modularly upgrade to meet our needs? I would guess not.


> how many watts are we really saving between lpddr5 and ddr5?

From what I gathered, it's around a watt per module when idling (which is when it's most critical): the sources I found seem to indicate that DDR5 always runs at 1.1V (or more, but probably not in laptops), while LPDDR5 can be downvolted. That's an extra ~10% idle power consumption per module.



Yeah, I was actually surprised to learn there was a reason other than "Apple wants you to buy a new Macbook or overspec your current one". It's annoying, but at least there's a plausible reason to why they do it.


M1 Macs started shipping in late 2020, so, for some definitions of "recently", sure.

It's true for any laptops that can be reasonably described as having a "SoC" and not CPU, anyway.

(I guess you could be extremely pedantic and try to argue that T2 counted as SoC? But clearly not what I meant.)



EEPROM like DIP packaging where it was damn near impossible to pull without bending a pin and/or smacking your hand on something?

God forbid someone steps on it too, I think I might still have some scars on my feet.



The problem is getting manufacturers to implement the new RAM standard. While the justifications given are great for the consumer, I didn't see any reason for a manufacturer to sign on.

They are going to lose money when people buy new RAM, rather than a whole new laptop. While processor speeds and size haven't plateaued yet, it's going to take a while to develop significant new speed upgrades and in the meantime, the only other upgrade is disk size/long-term storage, which, aside from Apple, they don't totally control.

So, why should they relinquish that to the user?



> While the justifications given are great for the consumer, I didn't see any reason for a manufacturer to sign on. [...] So, why should they relinquish that to the user?

It makes sense that the first ones to use this new standard would be Dell and Lenovo. They both have "business" lines of computers, which usually offer on-site repairs (they send the parts and a technician to your office) for a fairly long time (often 3 or 5 years). To them, it's a cost advantage to make these computers easier to repair. Having the memory (a part that fails not infrequently) in a separate module means they don't have to replace and refurbish the whole logic board, and having it easy to remove and replace means less time used by the on-site technician (replacing the main logic board or the chassis often means dismantling nearly everything before it can be removed).



> To them, it's a cost advantage to make these computers easier to repair.

Alternatively, it allows them to use more efficient RAM in computer lines they can't make non-repairable so they can boast of higher battery life.



> They are going to lose money when people buy new RAM, rather than a whole new laptop.

You're thinking about this the wrong way around.

Suppose the user has $800 to buy a new laptop. That's enough to get one with a faster processor than they have right now or more memory, but not both. If they buy one and it's not upgradable, that's not worth it. Wait another year, save up another $200, then buy the one that has both.

Whereas if it can be upgraded, you buy the new one with the faster CPU right away and upgrade the memory in a year. Manufacturer gets your money now instead of later, meanwhile the manufacturer who didn't offer this not only doesn't sell to you in a year, they just lost your business to the competition.



I doubt the mass of consumers that actually matters to manufacturers' earnings understands RAM value, or whether the computer they are buying is RAM-upgradable or not.

They are going to buy the $800 one, either of the two, complain when it inevitably "works slower" in a couple of years (if they are lucky), and then buy a new $800 one again. I don't see the manufacturer's motivation to offer upgradable RAM.



They don't have $800 to buy another one so soon. So they take the one that "works slower" to some tech who knows the deal and tells them this machine sucks because you can't upgrade it, and now they think your brand is crap (because it is), curse you for the next however many years until they have the money and then buy the next one from someone else.


I'd see two angles:

- the manufacturer themselves benefit from easier to repair machines. If DELL can replace the RAM and send back the laptop in a matter of minutes instead of replacing the whole motherboard to then have it salvaged somewhere else, it's a clear win.

- prosumers will be willing to invest more in a laptop that has better chance to survive a few years. Right now we're all expecting to have parts fail within 2 to 3 years on the higher end, and budget accordingly. You need a serious reason to buy a 3000$/€ laptop that might be dead in 2 years. Knowing it could weather RAM failure without manufacturer repair is a plus.



Unlike Apple, which is only in indirect competition on computer hardware, PC makers compete with each other directly: if Lenovo starts doing it, it becomes a marketing point, and Asus, HP, and Dell will try to get it too.

So it's a chicken-and-egg situation: if it turns out to matter to consumers, it might end up being something everyone has to catch up on.



Ugh, finally. And it's not just a repurposed desktop memory standard either! The overall space requirements look to be similar to the BGA that you'd normally solder on (perhaps 2-3x as thick?). I'm sure they can reduce that overhead going forward.

I love the disclosure at the bottom:

Full Disclosure: iFixit has prior business relationships with both Micron and Lenovo, and we are hopelessly biased in favor of repairable products.



> Ugh, finally.

FYI, the '2' at the end is because this isn't the first time this has been done. :)

LPCAMM spec has been out for a while. LPCAMM2 is the spec for next-generation parts.

Don't expect either to become mainstream. It's relatively more expensive and space-consuming to build an LPCAMM motherboard versus dropping the RAM chips directly on to the motherboard.



My recollection of this is that LPCAMM was a proposal from Dell that they put into the JEDEC standardization process, and LPCAMM2 is the resulting standard, named that way to avoid confusion with the non-standard LPCAMM that Dell trialed on a small number of commercial systems.


Almost. The Dell proposal is called CAMM; it was slightly modified during the JEDEC process and standardized as CAMM2, which is then combined with the memory type the same way DIMM was, for example LPDDR5X CAMM2 or DDR5 CAMM2. LPCAMM2 is not a name used in any JEDEC standard, or even referred to anywhere on their site, but it seems to be used by both memory manufacturers and users because it's less of a mouthful, and because they feel something more is needed to distinguish LPDDR5 CAMM2 from DDR5 CAMM2, since the two are not electrically compatible.


Not to mention putting the RAM directly on a System-in-Package chip like Apple does now. That's going to be unbeatable in terms of space and possibly have an edge when it comes to power consumption too. I wouldn't be surprised if future standards will require on-package RAM.

I kind of wish we could establish a new level in the memory hierarchy. Like, just make a slot where you can add slower, more power-hungry DDR RAM that acts as a big cache for the NVM storage, or to which the OS can offload stuff in main memory that isn't used much. It could be unpopulated in base models, and then you could buy an upgrade to stick in there to get some extra performance later if needed.



Yeah, they even gloss over Lenovo's crappy USB-C connectors soldered to the motherboard, which are always the weak point on modern ThinkPads. Well, that and Digital River (Lenovo's distributor) carries absolutely no spare parts for any Lenovos in Europe, and if they do list them they only rarely turn up, so you can't replace any of the replaceable bits because you can't get any.


Digital River is shit at everything. From spare parts, to delivery and tracking, to customer communications, to warranty claims. Every single interaction with them is a nightmare. It is the single reason I prefer to buy Lenovo from resellers rather than directly.


This is fantastic news. Hopefully the cost to manufacturers is only marginal and they find a suitable replacement for their current "each tier in RAM comes with a 5-20% price bump" pricing scheme.

Too bad apple is almost guaranteed to not adopt the standard. I miss being able to upgrade the ram in macbooks.



> Too bad apple is almost guaranteed to not adopt the standard.

Apple would require multiple LPCAMM2 modules to provide the bus width necessary for their chips. Up to 4 x LPCAMM2 modules depending on the processor.

Each LPCAMM2 module is almost as big as an entire Apple CPU combined with its unified RAM chips, so putting 2-4 LPCAMM2 modules on the board is completely infeasible without significantly increasing the size of the laptop.

Remember, the Apple architecture is a combined CPU/GPU architecture and has memory bandwidth to match. It's closer to your GPU than to the CPU in your non-Mac machine. Asking for upgradeable RAM on Apple laptops is almost like asking for upgradeable RAM on your GPU (which would not be cheap or easy).

For every 1 person who thinks they'd want a bigger MacBook Pro if it enabled memory upgrades, there are many, many more people who would gladly take the smaller size of the integrated solution we have today.



> Up to 4 x LPCAMM2 modules depending on the processor.

The non-Pro/Max versions (e.g. M3) use 128 bits, and those are arguably the notebooks that most need to be upgraded later, since they commonly come with only 8GB of RAM.

Even the Pro versions (e.g. M3 Pro) use up to 256 bits; that would be 2 x LPCAMM2 modules, which seems plausible.

For the M3 Max in the Macbook Pro, yes, 4 x LPCAMM2 would be impossible (probably). But I think you could have something like the Mac Studio have them, that is arguably also the kind of device that you probably want to increase memory in the future.



Apple ships 128 bit, 256 bit, and 512 bit wide memory interfaces on laptops (up to 1024 bit wide on desktops).

Is it feasible to fit memory bandwidth like the M3 Max (512 bits wide LPDDR5-6400) with LPCAMM2 in a thin/light laptop?
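For a sense of scale, here's a quick sketch of the peak bandwidth for each of those bus widths and how many modules that implies, assuming LPDDR5-6400 and 128 bits per LPCAMM2 module (the per-module width is an assumption based on the discussion above, not a spec quote):

  # Peak bandwidth per bus width, and how many 128-bit modules that implies.
  # 128 bits per LPCAMM2 module is an assumption from the thread, not a spec.
  transfer_rate_mtps = 6400    # LPDDR5-6400
  bits_per_module = 128

  for bus_bits in (128, 256, 512, 1024):
      bandwidth_gbs = bus_bits * transfer_rate_mtps / 8 / 1000
      modules = bus_bits // bits_per_module
      print(f"{bus_bits:4d}-bit bus: ~{bandwidth_gbs:6.1f} GB/s, {modules} module(s)")

The 512-bit case works out to roughly 400 GB/s across four modules, which is the packaging problem: that's GPU-class bandwidth in a laptop chassis.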



Apple could just make lower-tier MacBooks, but then Mac fanboys wouldn't be able to ask “but what about Apple's quarterly profits?”

Most MacBooks don't need high memory bandwidth; most users are using their Macs for word processing, Excel, and VS Code.



As a non-Mac reference, I work on an HP laptop from 2014. It was a high-end laptop back then; it's between 300 and 600 Euro refurbished now.

I expanded it to 32 GB RAM and a 3 TB SSD, but it's still an i7 4xxx with 1666 MHz RAM. And yet it's OK for Ruby, Python, Node, PostgreSQL, and Docker. I don't feel the need to upgrade. I will when I get a major failure and no spare parts to fix it.

So yes, low end Macs are probably good for nearly everything.



Even low-end gaming, simulations, and fun WebGL toys can require a fair amount of memory bandwidth with an iGPU like Apple's M series. It also helps quite a bit for inference. An MBP with an M3 Max can run models that would require multiple GPUs on a desktop and still get decent perf for a single user.


> An MBP with an M3 Max can run models that would require multiple GPUs on a desktop and still get decent perf for a single user.

Good for your niche case; the other 99.8% still only do web and low-performance desktop applications (which includes IDEs).



Yes, but Apple’s trying to build an ecosystem where users get high-quality, offline, low-latency AI computed on their device. Today there’s not much of that. And I don’t think they even really know what’s going to justify all of that silicon in the neural engine and the memory bandwidth.

Imagine 5 years from now people have built whole stacks on that foundation. And then competing laptops need to ship that compute to the cloud, with all of the unsolvable problems that come with that. Privacy, service costs (ads?), latency, reliability.



Apple is also deliberately avoiding having “celeron” type products in their lineup because those ultimately mar the brand’s image due to being kinda crap, even if they’re technically adequate for the tasks they’re used for.

They instead position midrange products from 1-2 gens ago as their entry level which isn’t quite as cheap but is usually also much more pleasant to use than the usual bargain basement stuff.



Apple does this because their CPU and GPU use the same memory, and it's generally the GPU that benefits from more memory bandwidth. Whereas in a PC optimized for GPU work you'd have a discrete GPU that has its own memory which is even faster than that.


Perhaps future LPCAMM generations will require more bits? I still can't imagine apple using them unless required by right to repair laws. But those laws probably don't extend to making RAM upgradeable.


Hoping we see AMD Strix Halo with its 256-bit interface crammed into an aggressively cooled, fairly-thin, fairly-light laptop. But it's going to require heavy cooling to make full use of.

Heck, make it only run full tilt when on an active cooling dock. Let it run half power when unassisted.



Kinda hilarious to see gamers buying laptops that can't actually leave the house in any practical meaningful way. I feel like some of them would be better off with SFF PCs and the external monitors they already use. I guess the biggest appeal I've seen is the ability to fold up the gaming laptop and put the dock away to get it off the desk, but then moving to an SFF on the ground plus a wireless gaming keyboard and wireless mouse that they already use with the normal laptop + one of those compact "portable" monitors seems like it'd solve the same problem.


My wife can get an hour of gaming out of her gaming laptop. They're good for being able to game in an area of the house where the rest of the family is, even if that means being plugged in at the dining table. Our home office isn't close enough.

Also a gaming laptop is handy if you want to travel and game at your hotel.



I’ve been wondering for a while now why ASUS or some other gaming laptop manufacturer doesn’t take one of their flagship gaming laptop motherboards, put some beefy but quiet cooling on it, put it in a pizza-box/console enclosure, and sell it as a silent compact gaming desktop.

A machine like that could still be relatively small but still be dramatically better cooled than even the thickest laptop due to not having to make space for a battery, keyboard, etc.



ZOTAC does these - there are ZBOX Magnus models with laptop-grade RTX 4000-series GPUs in 2-3 liter chassis. However, their performance and acoustics are rather... compromised compared to a proper SFF desktop (which can be built in ~3x the volume).


Yeah, those look like they’re too small to be reasonably cooled. What I had in mind is shaped like the main body of a laptop but maybe 2-3x as thick (to be able to fit plenty of heatsink and proper 120/140mm fans), stood up on its side.


Unified memory is basically L3-cache speed with zero copy between CPU and GPU.

It's an engineering trade-off. Depending on who you ask, it may or may not be worth it.



They will maliciously comply. They might even have 4 sockets for the 512-bit wide systems. But then they’ll keep the SSD devices soldered - just like they’ve done for a long time. Or cover them with epoxy, or rig it with explosives. That’ll show you for trying to upgrade! How dare you ruin the beautiful fat profit margin that our MBAs worked so hard to design in?!?


Apple lines the perimeter of the NAND chips on modern Mac minis with an array of tiny capacitors, so even the crazy people with heater boards can’t unsolder the NAND and replace it with higher-density NAND.


This is normal. They are called decoupling capacitors and are there to provide energy when the SSD needs short bursts of it. If you put them any further away, the bit of wire between them and the gate turns into an inductor and has somewhat undesirable characteristics.

Also replacing them is not rocket science. I reckon I could do one fine (used to do rework). The software side is the bugbear.
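A rough sketch of why those capacitors hug the chip, using a rule-of-thumb parasitic inductance of about 1 nH per millimetre of trace (an assumed ballpark, not a measured value):

  import math

  # Even a short trace adds series inductance, and its impedance grows with
  # frequency, which is what makes a distant decoupling cap ineffective.
  NH_PER_MM = 1.0   # assumed rule of thumb: ~1 nH of parasitic inductance per mm

  for distance_mm in (1, 5, 20):
      inductance_h = distance_mm * NH_PER_MM * 1e-9
      for freq_hz in (10e6, 100e6, 1e9):
          z_ohms = 2 * math.pi * freq_hz * inductance_h
          print(f"{distance_mm:2d} mm @ {freq_hz/1e6:6.0f} MHz -> ~{z_ohms:7.2f} ohm")

At a couple of centimetres and gigahertz-scale transients the "capacitor" looks like more than a hundred ohms of inductor, which is exactly the undesirable characteristic mentioned above.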



> The first iteration, known as CAMM, was an in-house project at Dell, with the first DDR5-equipped CAMM modules installed in Dell Precision 7000 series laptops. And thankfully, after doing the initial R&D to make the tech a reality, Dell didn’t gatekeep. Their engineers believed that the project had such a good chance at becoming the next widespread memory standard that instead of keeping it proprietary, they went the other way and opened it up for standardization.


That's also nice, but the memory speed is also higher, DDR5-7266 vs 5600 IIRC. The resulting higher bandwidth translates more or less directly into more performance for the iGPU.
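Assuming a 128-bit bus on both (an assumption for illustration), the speed-grade difference alone works out to roughly:

  # Peak bandwidth difference from the speed grade alone, 128-bit bus assumed.
  for mtps in (5600, 7266):
      print(f"(LP)DDR5-{mtps}: ~{128 * mtps / 8 / 1000:.1f} GB/s")
  print(f"Ratio: ~{7266 / 5600:.2f}x")

About 30% more peak bandwidth, which matters for an iGPU since it shares that bandwidth with the CPU.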


> LPDDR operates at lower voltages compared to DDR, giving it the edge in power efficiency. But, the lower voltage makes signal integrity between the memory and processor challenging,

Why can't the signaling channels use a higher voltage, with control circuitry on the memory stick stepping the signal levels up and down to access the memory module?



My personal understanding - for ThinkPads, it's next year. I guess Lenovo is doing real-life tests with the P1 here, gathering feedback before addressing other families like the T14/T14s.


I wonder if this will bring a new widely available high-performance connector to the wider market. SO-DIMM connectors have occasionally been repurposed for other uses, most notably by the Raspberry Pi Compute Modules 1-3, among other similar SOM/COM boards. The RPi CM4 switched to 2x 100-pin mezzanine connectors; maybe some future module could use CAMM connectors, I'd imagine they are capable enough.


The compression connector looks flimsier than a mezzanine so it should probably be a last resort for multi-gigahertz single-ended signaling.


The current Dell version of this: upgrade to 64GB is $1200. Found this the hard way when trying to get my engineering team what I thought would be a $200 upgrade per machine from their stock 32GB Precision laptop workstations.


Remember that Haswell laptops were the last to feature socketed CPUs.

RAM is nice to upgrade, for sure. As well as an SSD, but CPUs are still a must. I would even suggest upgradeable GPUs but I don't think the money is there for the manufacturers. Why allow you to upgrade when you can buy a whole new laptop?



I'm not sure I really get much value out of a socketed CPU, particularly in a laptop, vs something like a swappable MB+CPU combo where the CPU is not socketed.

RAM/Storage are great upgrades because 5 years from now you can pop in 4x the capacity at a bargain since it's the "old slow type". CPUs don't really get the same growth in a socket's lifespan.



As I said to the comment above, it makes perfect sense. In 2014 we purchased a dual-core Haswell. Almost a decade later I revived the laptop by installing more RAM, an SSD, and the best possible quad-core CPU for that laptop. The gain in processing power was massive and made the laptop usable again.


I'm sure it's all subjective (e.g. I'm sure someone here even considers the original dual core Haswell more than fine without upgrade in 2024) but going from a dual core Haswell to a quad core Haswell (or even a generation or two beyond, had it been supported) as an upgrade a decade after the fact just doesn't seem worth it to me.

The RAM/SSD sure - a 2 TB consumer SSD wasn't even a possible thing to buy until a year after that laptop would have come out and you can get that for <$100 new now. It won't be the highest performing modern drive but it'll still max out the bus and be many times larger than the original drive. Swap equipment 3 years from now and that's also still a great usable drive rather than a museum piece. Upgrading to a CPU that you could have gotten around the time the laptop came out? Sure, it has twice as many cores... but it still has pretty bad multi core performance and a god awful perf/wattage ratio to be investing new money on a laptop for. It's also a bit of a dead end, in 3 years you'll now have 2 CPUs so ancient you can't really do much with them.



This matches my experience. Every PC I've built over the last 30 years has benefited from memory and storage upgrades through its life, and I've upgraded the GPU a few times. However, every time I've looked at upgrading to another CPU with the same socket it has either not been a big enough step up, or been too much of a power hog relative to the midrange CPU I originally built with. The only time I've replaced CPUs is when I've fried them :)


Yup, so I've adopted a strategy for my past few desktop builds like this:
  - Every time a new ToTL GPU comes out for a new family, buy it at retail price as soon as it launches (so, the first-available ToTL models that were big gains in perf: GTX 1080 Ti, RTX 2080 Ti, RTX 3090, RTX 4090)

  - Every other release cycle, upgrade CPU to the ToTL consumer chip (eg on a 12900KS right now, HEDT like ThreadRipper is super expensive and not usually better for gaming or normal dev stuff). I was with Ryzen since 1800x -> 3950x -> 5950x but Intel is better for the particular game I play 90% of the time.

  - Every time you upgrade, sell the stuff you've upgraded ASAP. If you do this right and never pay above MSRP for parts, you can usually keep running very high-end hardware for minimal TCO.

  - Buy a great case, ToTL >1000w PSU (Seasonic or be quiet!), and ToTL cooling system (currently on half a dozen 140mm Noctua fans and a Corsair 420mm AIO). This should last at least 3 generations of upgrading the other stuff.

  - Storage moves more slowly than the rest, and I've had cycles where I've re-used RAM as well, so again here go for the good stuff to maximize perf, but older SSDs work great for home servers or whatever else.

  - Monitor and other peripherals are outside of the scope of this but should hopefully last at least 3 upgrade generations. I bit when OLED TVs supported 4K 120hz G-Sync, so I've got a 55" LG G1 that I'm still quite happy with and not wanting to immediately upgrade, though I do wish they made it in a 42" size, and 16:10 would be just perfect.


Socket AM4 had a really good run. Maybe we just have to pressure manufacturers to make old-socket variations of modern processors.

The technical differences between sockets aren't usually huge. Upgrade the memory standard here, add or remove PCIe lanes there. Using new cores with an older memory controller may or may not be doable, but it's quite simple to not connect all the PCIe lanes the die supports.



but then what excuse would you have to throw another $500 at Asus for their latest board that, while being the best chance the platform has, still feels like it runs a beta BIOS for the first 9 months of ownership?


On a laptop it's not very practical.

Because you can't swap the motherboard, your options for CPUs are going to be quite limited. Generally, only higher-tier CPUs of that same generation - which draw more power and require more cooling.

Generally a laptop is designed to provide a specific power budget to the CPU and has a limited amount of cooling.

Even if you could swap out the CPU, it wouldn't work properly if the laptop couldn't provide the necessary power or cooling.



I can't say I agree. Back in 2014 a laptop was purchased with a dual-core Haswell CPU. 8 years later I revived the laptop by upgrading to almost the best possible CPU for it, a 4-core/8-thread part (or 4-core/4-thread, I'm not sure which it was), and the speed boost was massive. This is how you keep old tech alive.

And the good thing about mobile CPUs is that they have almost the same TDP across the various dual/quad versions (or whatever the norm is today).



How old was the new CPU though? Probably the same or similar generation to what it originally came with since the socket needs to be the same.

IMO the switch to an SSD would have been the biggest boost.



I'm writing this from my Framework 16 with GPU and it is the best laptop I've ever known. It's heavy and big and not the most portable, but I knew that would be the case going into it and I have no regrets


> The Framework laptop 16 features replaceable GPU.

In a way I don't mind having non-replaceable RAM as an option in the Framework ecosystem, simply because the motherboard itself is modular and has to be swapped to upgrade the CPU anyway. At that point, though, I would prefer RAM integrated with the CPU/GPU.



Framework open sources most of their schematics, if I understand correctly. So it should be possible for others to use the same standard, if they wanted to. (they don't want to)


The form factor isn't great for being a vendor-neutral thing.

If we can convince the companies to actually try for compatibility, then a revival of MXM is probably a significantly better option.



MXM was problematic because of the inflexibility of the form factor when upgrading a given system. If your laptop's size, power, and cooling were designed for a GTX 1030, you couldn't replace it with a GTX 1080 module.

In Framework's case, the cooling is integrated into the GPU module, and its size, cooling, and power delivery can all be adjusted to the GPU's power.



I don't mind having a wattage limit on the slot. That's easy to factor into purchasing decisions. The much bigger issues are how custom each kind was, with very limited competition on individual modules and a big conflict of interest in wanting to sell you a new laptop.

A friend of mine was betrayed on this by MSI, where laptops with GTX 900 series GPUs were promised upgrades and then when the 1000 series came out they didn't offer any. I think they did make weak excuses about power use, but a 1060 would have fit within the power budget fine and been an enormous upgrade. A few people have even gotten 1060 modules to work with BIOS edits, so it wasn't some other incompatibility. It seems like they saw they couldn't offer a 1080 and threw out the entire project and promise, and then offered a mild discount on a brand new laptop, no other recourse.



I would say it would make the most sense to have the entire RAM+CPU+GPU assembly be replaceable. Just have some standard form factors and connectors for the external connections.

This way, you could keep power consumption low and still be able to upgrade the CPU to a new generation.



Laptops have always been trading size for upgradeability and other factors, and soldering everything is the way to make them tiny. If you ask me they've gotten too extreme in size. The first laptops were way too bulky, but they hit a sweet spot around 2005-2010, being just thick enough to hold all those D-Sub connectors (VGA, serial, etc).

And soldering stuff to the board is the default way to make something when upgradeability isn't a feature.



Can it come loose and suddenly not have all pins attached properly? That's something that's unlikely to happen with SODIMM slots, but I've seen screw receptacles fail so many times.


No, RAM is not something that is exposed on the PCIe bus (which is what thunderbolt is based on). RAM has a different protocol (DDR5 in this case), and as it says in the article, is very sensitive to the distance between the CPU and the RAM. External RAM isn't really something that is viable in the modern era of computers as far as I know.


Surprisingly, this is starting to show up in the server market lately with a new protocol/tech called CXL. The latency issue is still there over distance, but it will let more remote-memory-type setups start to happen. I doubt you'll ever do more than a few meters (i.e. within the same rack), but it will likely end up being used by so-called "hyperscaler" companies to allocate resources more flexibly, similar to how they're doing PCIe over Ethernet with DPU devices right now. It's unlikely this will reach the consumer level even in the medium term, because that kind of flexibility is still so niche, but we might eventually see some CXL connectivity for things like GPUs or other accelerators to get more memory or to share it better between host and accelerator.

EDIT: article about a tech demo of it on a laptop actually, hadn't seen this before: https://www.techradar.com/pro/even-a-laptop-can-run-ram-exte...



Only CXL has the potential to be outsourced to Thunderbolt, as it works off PCIe and system RAM does not. CXL (Compute eXpress Link) is a server grade technology that's really aimed at solving some problems within the high performance compute area, like cache coherency. If you don't get it, I don't either tbh.


The physical size of the socket and having the connections on the edge mean you're forced to have much longer traces. Longer traces mean slower signalling and more power loss due to higher resistance and parasitics.

This[1] Anandtech article from last year has a better look at how the LPCAMM module works. Especially note how the connectors are now densely packed directly under the memory chips, significantly reducing the trace length needed. Not just on the memory module itself but also on the motherboard due to the more compact memory module. It also allows for more pins to be connected, thus higher bandwidth (more bits per cycle).

[1]: https://www.anandtech.com/show/21069/modular-lpddr-becomes-a...



> power losses are going to be small compared to the rest of the stack

While certainly not the largest losses, they do not appear insignificant. In LPDDR4 they introduced[1] a new low-voltage signalling scheme, which I doubt they could have gotten working with SODIMMs due to the extra parasitics.

If you look at this[2] presentation you can see that at 3200MHz a DDR4 SODIMM would consume around 2 x 16 x 4 x 6.5mW x 3.2GHz = 2.6W for signalling going full tilt. Thanks to the new signalling LPDDR4 reduces this by 40% to around 1.6W.

Compare that to a low-power CPU having a TDP of 10W or less a full 1W reduction per SODIMM just due to signalling isn't insignificant.

To further put it into perspective, the recent Lenovo ThinkPad X1[3] uses around 4.15W average during normal usage, and that includes the screen.

Obviously the memory isn't going full tilt at normal load, but at, say, an average of 0.25W x 2 sticks, that would reduce the X1's battery lifetime by roughly 10%.

edit: yes, I'm aware the presentation is about LPDDR4 while the X1 uses LPDDR5; just trying to add context using available sources.

[1]: https://www.jedec.org/news/pressreleases/jedec-releases-lpdd...

[2]: https://www.jedec.org/sites/default/files/JY_Choi_Mobile_For...

[3]: https://www.tomshardware.com/reviews/lenovo-thinkpad-x1-carb...
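A quick sketch that reproduces the back-of-envelope arithmetic in the comment above, taking the figures from that comment (and interpreting the 6.5 mW as a per-pin, per-GT/s signalling cost, which is an assumption about the presentation's units):

  # Reproduces the comment's arithmetic; all figures come from the comment
  # above, not from a datasheet.
  pins = 2 * 16 * 4                 # the comment's 2 x 16 x 4
  mw_per_pin_per_gtps = 6.5         # assumed per-pin, per-GT/s signalling cost
  transfer_rate_gtps = 3.2          # DDR4-3200 going full tilt

  ddr4_w = pins * mw_per_pin_per_gtps * transfer_rate_gtps / 1000
  lpddr4_w = ddr4_w * (1 - 0.40)    # ~40% lower, per the comment

  print(f"DDR4 SODIMM signalling, full tilt: ~{ddr4_w:.2f} W")
  print(f"LPDDR4 equivalent (-40%):          ~{lpddr4_w:.2f} W")

  # Battery-life impact under the comment's assumptions:
  extra_w = 0.25 * 2                # 0.25 W average per stick, two sticks
  x1_avg_draw_w = 4.15              # ThinkPad X1 average draw from the review
  print(f"Extra draw vs. X1 baseline:        ~{extra_w / x1_avg_draw_w:.0%}")

The last line comes out around 12%, in the same ballpark as the ~10% battery-life figure above.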



It requires too much power, according to the article. This allows "LP" (Low Power) parts to be removable; they normally have to be soldered onto the board close to the CPU because of the low voltage tolerances.


One of the biggest problems is that edge connections don't give you enough density. Edge connections are great for servers where you stack 16 channels next to each other, but in a laptop form factor your space is already limited, so you can get more wires coming out of the RAM by connecting to the face rather than the edge.


Why are you trying to bankrupt Intel??? Without being able to charge 5x as much for Xeons for ECC support, why would anyone ever pony up for one?


I would welcome something like the Raspberry Pi Compute Module, which contains CPU+RAM and communicates with the other parts via PCIe. Such a standard could last decades!

Yet another standard for memory will just fail.



Compared to how big the CPU package is on those machines, 4 of these sticks on each side of the motherboard should fit acceptably.

And you'd be able to have a lot more than 192GB.



Don’t forget - they solder in the flash too even though there is no technical reason to do so.

Unless “impossibly fat profit margin” is a technical requirement.



> Don’t forget - they solder in the flash too even though there is no technical reason to do so.

There is, Apple uses flash memory as swap to get away with low RAM specs, and the latency and speed required for that purpose all but necessitates putting the flash memory directly next to the SoC.



Yeah, but some people need to justify their $1,800 USD purchase of a laptop that comes with only 8 GB of RAM. Even though most laptops manufactured today also come with NVMe flash storage (PCIe, usually connected directly to the CPU), which is used by all operating systems as swap.


It's harder to confirm for laptops, but you can refer to motherboard manuals to see whether any of your PCIe-related slots go through a switch or not. For example, my current PC has a PCIe x16 slot, an x1 slot, and two M.2 NVMe slots. The manual says everything is connected to the CPU except the x1 slot, which goes through the motherboard chipset. I don't see why any laptop would make NVMe go through a PCIe switch unless the CPU doesn't provide enough lanes to support everything on the motherboard. Even at the lowest end, a dual-core Intel Core i3-10110U (a laptop processor from 2019) has 16 lanes from the CPU, which could support at least one NVMe drive without going through a switch.
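On Linux, one way to eyeball this without digging through the manual is to dump the PCIe topology; a minimal sketch (assumes pciutils/lspci is installed, and you still have to interpret which root ports are CPU-attached from the platform docs):

  import subprocess

  # Show the PCIe hierarchy as a tree, so you can see which bridge or
  # root port each device sits behind.
  print(subprocess.run(["lspci", "-tv"], capture_output=True,
                       text=True, check=True).stdout)

  # List NVMe controllers (PCI class 0108) so they can be located in the tree.
  print(subprocess.run(["lspci", "-d", "::0108"], capture_output=True,
                       text=True, check=True).stdout)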


On the other hand, with a reflow station everything becomes modular and repairable.

I do hope that more widespread use of compression attachment gives us some development in the area where projects promising modular devices failed (remember those 'modular' phone concepts? available physical interconnects were one of the failures...). Sockets for BGAs have existed for a while, but were not really end-user friendly (not that LGA or PGA are that amazing), so maybe my hope is misplaced and many-contact connections will always be worse than direct attachment (be it PCB or SiP/SoC/CPU shared substrate).



> maybe my hope is misplaced and many-contact connections will always be worse than direct attachment

As much as I like socketed / user-replaceable parts, fact is that soldering down a BGA is a very reliable way to make those many connections.

On devices like smartphones & tablets RAM would hardly ever be upgraded even if possible. On laptops most users don't bother. On Raspberry Pi style SBCs it's not doable.

Desktops, workstations & servers are the exception here.

Basically the high-speed parts of a system need to be as close together as physically possible. Especially if low power consumption is important.

Want easy upgrades? Then compute module + carrier board setups might be the way to go. Keep your I/O connectors / display / SSD etc, swap out the CPU/GPU/RAM part.



> On the other hand, with a reflow station everything becomes modular and repairable.

until you hit custom undocumented unobtainium proprietary chips. good luck repairing anything with those.



I'm sure this will find use in Business-Class "Mobile workstations", but having integrated DDR4 in my own hardware, I have a hard time seeing this as the mainstream path forward for mobile computing.

There's lots of value in tight integration. Improved signal integrity (ie, faster), improved reliability, better thermal flow, smaller packaging, and lower cost. Do I really want to compromise all of those things just to make RAM upgrades easier?

And how many times do I need to upgrade the RAM in a laptop, really? Twice? Why make all those sacrifices to use a connector, instead of just reworking the DRAM parts? A robotic reflow machine is not so complex that a small repair shop couldn't afford one, which is what you see if you go to parts of the world where repair is taken seriously. Why do I need to be able to do it at home? I can't re-machine my engine at home. It's the most advanced nanotechnology humanity can produce, why is a $5k repair setup unreasonable?

This is not to mention the direction things are really going, DRAM on Package/Die. The signaling speed and bus widths possible with co-packaged memory and HBM are impossible to avoid, and I'm not going to complain about the fact that I can't upgrade the RAM separately from the CPU, any more than I complain about not being able to upgrade my L2 cache today. The memory is part of the compute, in the same way the GPU memory is part of the GPU.

I hope players like iFixit and Framework aren't too stubborn in opposing the tight integration of modern platforms. "Repairable" doesn't need to mean the same thing it did 10 years ago, and there are so many repairability battles that are actually worth fighting, that being stubborn about the SOTA isn't productive.



>I'm sure this will find use in Business-Class "Mobile workstations", but having integrated DDR4 in my own hardware, I have a hard time seeing this as the mainstream path forward for mobile computing.

I don't know, I would say the reverse: workstations might need the performance of DRAM on package/die, but I don't believe that's the case for the mainstream user.

> A robotic reflow machine

Maybe the same for servicing enterprise customers, but probably way too expensive for the mainstream.

I certainly hope that players continue to oppose tight integration, and I'll try to support them. I value the ability for anyone to swap RAM and disks to easily upgrade or repair their device more than an increase in performance or even battery life.

I recently cobbled together a computer for a friend's child from components from three different computers; any additional cost would have made the exercise worthless.
