Testing two 18 TB white label SATA hard drives from datablocks.dev

Original link: https://ounapuu.ee/posts/2025/10/06/datablocks-white-label-drives/

## Home Server Storage: Back to Hard Drives with datablocks.dev

After years of running an all-SSD home server, storage anxiety and high costs pushed me to explore hard drive options. Shipping costs made hunting for deals in the US impractical, and I found [datablocks.dev](https://datablocks.dev/), a European vendor specializing in "white label" hard drives. These drives carry no branding and come at a clear price advantage (about 16.7% cheaper than recertified drives), despite possible minor scratches and low power-on hours. I bought two 18 TB drives, prioritizing easy replaceability over larger capacities that might be harder to source. Stock fluctuates quickly, but at roughly 13 EUR per TB the pricing is competitive. The drives arrived well packaged, with the expected minor blemishes. A 24-hour `badblocks` test confirmed they work, with write performance peaking at 275 MB/s. With the drives integrated, I built a tiered storage setup: SSDs for speed-sensitive tasks, hard drives for bulk storage and backups. Surprisingly, day-to-day performance is largely unaffected, with only a slight increase in iowait observed under heavy load. Power usage rose by 10-20 W, which is still acceptable. Overall, I'm happy with this cost-effective solution and expect years of reliable service.

## Hacker News Discussion Summary: Testing White Label Drives and Home Storage

A user ("hddherman") shared an article about testing two 18 TB white label SATA drives from datablocks.dev, sparking a discussion about affordable, high-capacity storage solutions. Commenters raised concerns about the origin of "white label" drives — they may be rejects returned to Seagate — and about their reliability. The conversation expanded into strategies for building DIY NAS setups, with users debating the merits of older enterprise hardware versus newer, more power-efficient components. Many highlighted the challenges of scaling storage beyond a few drives, discussing options such as archiving to single disks, clustered filesystems, and the cost/benefit of enterprise gear. A key point of contention was running RAID over USB drives, with warnings about potential data integrity issues and disconnects triggering rebuilds. Software RAID solutions such as ZFS were favored over traditional hardware RAID. The discussion also touched on SSDs as an alternative, acknowledging their higher cost while recognizing their potential advantages in reliability and power consumption. Overall, the thread shows a community deeply engaged in optimizing home storage, balancing cost, performance, and data safety.

Original article

This post is NOT sponsored, the products were bought with my hard-earned money.

I’ve been running a full SSD storage setup for a few years in my home server and I’ve been happy with it, except for the storage anxiety that I get with running small pools of fast storage, which is why I started looking at how the hard drive market is doing.

Half of tech YouTube has been sponsored by companies like ServerPartDeals, so they were one of the first places I looked at, but they seem to only operate within the US and the shipping+taxes destroy any price advantages from ordering there to Estonia (which is in Europe).

At some point I stumbled upon datablocks.dev, which seems to operate within a similar niche, but in Europe and on a much smaller scale. What caught my eye were their white label hard drive offerings. Their website has a good explanation on the differences between recertified and white label hard drives. In short: white label drives have no branding, have no or very low number of power-on hours, may have small scratches or dents, but are in all other aspects completely functional and usable.

White label drives also have a price advantage compared to branded recertified drives. Here’s one example with 18 TB drives, the recertified one is 16.7% more expensive compared to the white label one, and the only obvious difference seems to be the sticker on the drive. I highly suspect that the white label one is also manufactured by Seagate based on the physical similarities.

The price difference between a recertified and a white label drive.

I took some time to think things over and compared the pricing of various drives. The drives were all competitively priced between each other, with the price per terabyte hovering around 13 EUR/TB, so it didn't matter much which drive size you picked, you'd still get a pretty solid deal. It was also a better deal compared to using a WD Elements/My Book drive of the same size.

I decided to go with two 18 TB hard drives. I considered buying the 20 TB or 22 TB capacities, but decided to go with 18 TB because it’s the largest single hard drive that I can easily and quickly buy a replacement for in the form of a WD Elements/My Book drive.

The stock on datablocks.dev is quite volatile, the drives are in stock when new batches arrive, but they can also quickly go out of stock. I saw this live with the 22 TB hard drives, one day there are 35 left, the next day there can be 7 left, and then only one lone drive.

At the time of writing, the 18 TB model that I bought is out of stock, so my choice to go with a slightly smaller but more easily replaceable one is validated.

Those who have followed my blog for a while will know that I'm a huge fan of all-SSD server builds, especially this one by Jeff Geerling that I still consider building from time to time. If I dislike noise, higher power usage and slower performance, then why did I get the hard drives? It's simple, really: I now have an actual closet that I can stash my home server in, meaning that noise isn't that big of a worry, and as long as my home server takes about the same amount of power as my refrigerator or dishwasher, then that's fine. SSD prices still haven't gone down as much as I've hoped over the years, so the all-SSD build ideas that I have are way outside my budget.

The drives arrived in a reasonable time window. The packaging was adequate, although I was slightly concerned with the cardboard box showing signs of something hitting it hard. The drives were packaged within sealed antistatic bags, and with ample bubble wrap surrounding them.

The cardboard box with a slight dent.
Plenty of paper inside to prevent the drives from flying around.
Drives were wrapped in bubble wrap, with the drives themselves also separated with a few layers of it for maximum protection.
Drives in anti-static bags.

Just as described, the drives did have slight scratches and very minor dents in them, but in all other aspects they looked like new.

One of the hard drives. It does have slight dents and scratches, matching the description.
The second drive had a more noticeable bump in it.
The backside of the drives.
Those USB-SATA adapters from shucking are really darn handy now. Adapter courtesy of my brother-in-law.

Before putting them to use, I ran a full destructive write test on the drives using `badblocks`. A full drive write took 24 hours. The write performance peaked at 275 MB/s and slowed down to 123 MB/s at the end, which is expected: the outer tracks of a hard drive pack more sectors per revolution, so throughput drops as the heads move towards the inner tracks.

The performance of the drive during the full drive format.
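As a sanity check, the observed speeds line up with the 24-hour duration: taking roughly 200 MB/s as a midpoint between the 275 MB/s peak and the 123 MB/s tail (my own rough average, not a measured figure), one full write pass over 18 TB comes out to about a day:

```shell
# Rough time estimate for one full write pass over the drive.
capacity_bytes=18000000000000   # 18 TB, decimal units as marketed
avg_mb_s=200                    # rough midpoint of 275 MB/s peak and 123 MB/s end

awk -v cap="$capacity_bytes" -v rate="$avg_mb_s" \
    'BEGIN { printf "~%.1f hours per pass\n", cap / (rate * 1000000) / 3600 }'
# → ~25.0 hours per pass
```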

I also had to choose a larger block size for badblocks, because with the default block size the drive's block count overflows the counter that badblocks uses and the tool refuses to run. The resulting command was `badblocks -wsv -b 8192 /dev/sdX`.

This is what peak jank looks like.

I unfortunately did not save the SMART data from the time I received the drives, but the values were as expected: no more than a few power-on hours, and the other metrics were OK. Keep in mind that it's possible to reset SMART data on a drive, so this information cannot be taken at face value.
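For anyone doing the same check, smartmontools makes this easy: `smartctl -a /dev/sdX` dumps the attribute table, and the raw value of the Power_On_Hours attribute is the number to eyeball. A minimal sketch of pulling that value out of the table — the sample line below is illustrative, not output from my drives:

```shell
# In practice: smartctl -a /dev/sdX | awk '$2 == "Power_On_Hours" { print $NF }'
# A sample attribute line stands in for real output so the sketch is self-contained.
smart_line='  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       3'

echo "$smart_line" | awk '$2 == "Power_On_Hours" { print $NF }'
# → 3
```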

The drives are noisy, as expected. They run at 7200 RPM and do the usual clicks and clacks that a normal hard drive does. If this bothers you, use foam to fix it. The soft side of a sponge can work just as well.

With these drives I've now followed my own advice and tiered my storage: two 1 TB SSDs for the things that benefit from good speed and latency (databases, containers), and the 18 TB hard drives for bulk storage, backups and less frequently used data. Coming from an all-SSD build, I expected the performance to drop in day-to-day operations, but in most cases I cannot tell a difference. My family photos load just fine, media plays back well, and backups take slightly longer, which isn't noticeable due to them running during the night. Only when I look at the Prometheus node exporter graphs do I notice that sometimes the server is waiting behind the disks a bit more due to higher iowait.
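At the filesystem level the split is simple: each tier gets its own mountpoint, and workloads are pointed at whichever tier fits. A sketch of what such a layout might look like in /etc/fstab — the labels and mountpoints here are made up for illustration, not my actual configuration:

```
# Fast tier: SSDs for databases and container volumes
LABEL=ssd-fast    /srv/fast    ext4  defaults,noatime  0 2
# Bulk tier: 18 TB drives for media, backups, cold data
LABEL=hdd-bulk-a  /srv/bulk-a  ext4  defaults,noatime  0 2
LABEL=hdd-bulk-b  /srv/bulk-b  ext4  defaults,noatime  0 2
```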

During full backups or disk scrubs, the iowait is more prevalent on graphs (the red part), but that doesn't seem to impact my other workloads in a significant way.
The drives are connected via two WD Elements/My Book USB-SATA adapters, over USB 3.0, and stored right below my ThinkPad T430, which is proudly running as my home server.
I added glue-on rubber feet on the stand to make sure the drives do not accidentally slip off anywhere. It does nothing to reduce the noise, though, and I'm convinced that it's actually making the noise worse.
I'm not proud of the lack of cable management, but this setup works well. Given how often I get new ideas, it doesn't make sense to organize this too much anyway.

The power usage did shoot up as a result, roughly 10-20 W. Not ideal, but my whole networking and home server setup is idling at below 45 W, and I’ve had less efficient home servers in the past, so it’s not that big of a deal.

The power usage was elevated while I was formatting and copying files over to the new drives, but after that it's stabilized to around 1.2 kWh per day.
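That 1.2 kWh per day translates directly into average draw, which squares with the sub-45 W idle figure plus some activity on top:

```shell
# Average power draw implied by daily energy consumption.
kwh_per_day=1.2
awk -v kwh="$kwh_per_day" 'BEGIN { printf "%.0f W average\n", kwh * 1000 / 24 }'
# → 50 W average
```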

In this configuration, the drives run quite cool. During formatting on a hot day, I saw them go up to a maximum of 51°C, but in general use they sit at around 38-42°C.

Overall, I’m reasonably happy with the drives. I expect these to last me at least 5 years, and I’m probably going to switch one of the drives out a bit sooner to reduce the risk of a full drive pool failure. They’ve made it the first 50 days, so that’s good!
