|
|
The baseline 14" MacBook Pro that costs $1600 has 8GB of shared RAM. That's not enough. I don't believe OP is talking about machines better suited for your task, machines in the $1k range.
|
|
|
Unified memory is basically L3-cache speed with zero copy between the CPU and GPU. There are real engineering differences; depending on who you ask, it may or may not be worth it.
|
|
That's also nice, but the memory speed is also higher: DDR5-7266 vs. 5600, IIRC. The resulting higher bandwidth translates more or less directly into more performance for the iGPU.
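For a rough sense of scale (a back-of-envelope sketch assuming a 128-bit dual-channel bus, not vendor specs):

    # Peak bandwidth = bus width in bytes x transfer rate.
    def peak_gbs(mt_per_s, bus_bits=128):
        return bus_bits / 8 * mt_per_s / 1000  # GB/s

    print(peak_gbs(7266))  # ~116 GB/s
    print(peak_gbs(5600))  # ~89.6 GB/s

That's roughly a 30% bandwidth bump from the transfer rate alone.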
|
|
|
My personal understanding: for ThinkPads, it's next year. I'd guess Lenovo is running real-life tests with the P1 here, gathering feedback before addressing other families like the T14/T14s.
|
|
|
The compression connector looks flimsier than a mezzanine, so it should probably be a last resort for multi-gigahertz single-ended signaling.
|
|
|
Framework open-sources most of their schematics, if I understand correctly. So it should be possible for others to use the same standard, if they wanted to. (They don't want to.)
|
|
|
The form factor isn't great for being a vendor-neutral thing. If we can convince the companies to actually try for compatibility, then a revival of MXM is probably a significantly better option.
|
|
Can it become loose and then suddenly not have all pins properly attached? That's unlikely to happen with SODIMM slots, but I've seen screw receptacles fail so many times.
|
|
|
Surprisingly, this is starting to show up in the server market lately with a new protocol called CXL. But yeah, that latency issue is still there over distance, though it will let more remote-memory-type stuff start to happen. I doubt you'll ever do more than a few meters (i.e. within the same rack), but it'll likely end up being used by so-called "hyperscaler" companies to allocate resources more flexibly, similar to how they're doing PCIe over Ethernet with DPU devices right now. It's unlikely this will reach the consumer level anytime soon, even medium term, because that kind of flexibility is still so niche, but we might eventually see some CXL connectivity for things like GPUs or other accelerators to get more memory or share it better between host and accelerator.

EDIT: article about a tech demo of it on a laptop, actually; hadn't seen this before: https://www.techradar.com/pro/even-a-laptop-can-run-ram-exte...
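Back of the envelope on why a few meters is probably the ceiling (assuming ~5 ns/m signal propagation in copper, i.e. roughly 2/3 c; illustrative numbers only):

    # Added round-trip latency from cable length alone, ignoring
    # controller/switch overhead, which only makes things worse.
    NS_PER_METER = 5.0
    for meters in (1, 3, 10):
        print(f"{meters} m: +{2 * meters * NS_PER_METER:.0f} ns round trip")
    # Local DRAM access is on the order of ~100 ns, so tens of extra
    # nanoseconds per hop is tolerable; microseconds are not.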
|
|
The physical size of the socket and having the connections on the edge means you're forced to have much longer traces. Longer traces mean slower signalling and more power loss due to higher resistance and parasitics.

This[1] AnandTech article from last year has a better look at how the LPCAMM module works. Especially note how the connectors are now densely packed directly under the memory chips, significantly reducing the required trace length, not just on the memory module itself but also on the motherboard, thanks to the more compact module. It also allows more pins to be connected, and thus higher bandwidth (more bits per cycle).

[1]: https://www.anandtech.com/show/21069/modular-lpddr-becomes-a...
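To see why trace length matters at these speeds (a sketch assuming a ~7 ps/mm propagation delay on FR-4 and an LPDDR5X-7500 transfer rate; both figures are rough assumptions):

    ui_ps = 1e12 / 7.5e9    # one bit time (unit interval) at 7500 MT/s ≈ 133 ps
    ps_per_mm = 7.0         # rough FR-4 stripline propagation delay
    for extra_mm in (10, 30, 50):
        flight = extra_mm * ps_per_mm
        print(f"+{extra_mm} mm trace: {flight:.0f} ps ≈ {flight / ui_ps:.1f} UI")

An extra 5 cm of trace is ~350 ps of flight time, i.e. more than two full bit times, which has to come out of your timing and equalization budget.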
|
|
> power losses are going to be small compared to the rest of the stack

While certainly not the largest losses, they do not appear insignificant. In LPDDR4 they introduced[1] new low-voltage signalling, which I doubt they could have gotten working with SODIMMs due to the extra parasitics.

If you look at this[2] presentation, you can see that at 3200MHz a DDR4 SODIMM would consume around 2 x 16 x 4 pins x 6.5pJ/bit x 3.2Gbps ≈ 2.6W for signalling going full tilt. Thanks to the new signalling, LPDDR4 reduces this by 40% to around 1.6W. Compared to a low-power CPU with a TDP of 10W or less, a full 1W reduction per SODIMM just from signalling isn't insignificant.

To further put it into perspective, the recent Lenovo ThinkPad X1[3] uses around 4.15W on average during normal usage, and that includes the screen. Obviously the memory isn't going full tilt at normal load, but say an average of 0.25W x 2 sticks would reduce the X1's battery life by 10%.

edit: yes, I'm aware the presentation is about LPDDR4 while the X1 uses LPDDR5; I'm just trying to add context using available sources.

[1]: https://www.jedec.org/news/pressreleases/jedec-releases-lpdd...
[2]: https://www.jedec.org/sites/default/files/JY_Choi_Mobile_For...
[3]: https://www.tomshardware.com/reviews/lenovo-thinkpad-x1-carb...
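Reproducing the arithmetic (assuming the 6.5 figure from the presentation is energy per bit, 6.5pJ/bit, which is numerically the same as 6.5mW per Gbps per pin):

    pins = 2 * 16 * 4                  # 128 I/O pins across two SODIMMs
    ddr4_w = pins * 6.5e-12 * 3.2e9    # 6.5 pJ/bit x 3.2 Gbps per pin ≈ 2.66 W
    lpddr4_w = ddr4_w * 0.6            # the cited ~40% reduction ≈ 1.6 W
    print(f"DDR4: {ddr4_w:.2f} W, LPDDR4: {lpddr4_w:.2f} W")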
|
|
Why are you trying to bankrupt Intel??? Without being able to charge 5x as much for Xeons for ECC support, why would anyone ever pony up for one?
|
|
|
Compared to how big the CPU package is on those machines, 4 of these sticks on each side of the motherboard should fit acceptably. And you'd be able to have a lot more than 192GB.
|
|
Don’t forget: they solder in the flash too, even though there is no technical reason to do so. Unless “impossibly fat profit margin” is a technical requirement.
|
|
> On the other hand, with a reflow station everything becomes modular and repairable.

Until you hit custom, undocumented, unobtainium proprietary chips. Good luck repairing anything with those.