Thursday, December 30, 2021

Getting a used PowerEdge R430 rack server

The rack server journey continues... after getting an R320 and an R720, both from the 12th generation of Dell servers, I found an R430 (13th generation) at a reasonable price on Yahoo! Auction.


This one came with dual Xeon E5-2643 v3 processors, 32 GB of RAM, and four 900 GB SAS hard disks. I set up the four SAS disks in a ZFS pool, which gives me 3.59 TB of storage. Each processor has 6 cores and 12 threads (12 cores and 24 threads in total), running at a 3.4 GHz base clock and boosting to 3.7 GHz. I think this processor, for its generation, has the best balance of core/thread count and clock speed. Yes, there are processors with higher boost speeds, but they usually have lower core counts, and those with higher core counts cannot match these base/boost clocks. So while I was thinking of upgrading the CPUs... I think I will stick with them and work within this limit.
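
For reference, creating a pool like that is basically a one-liner. This is a minimal sketch, assuming a plain striped pool with no redundancy (which is what the ~3.6 TB usable figure suggests) and that the four disks show up as /dev/sdb through /dev/sde; the device names and the pool name "tank" are just examples:

    # Stripe the four 900 GB SAS disks into one pool (no redundancy!)
    zpool create -f tank /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Check the resulting capacity
    zpool list tank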

Expansion is a bit difficult, though, since there are only two PCIe x16 low-profile slots, apparently limited to 25W. That rules out most GPUs, unless they are externally powered. But I also found an old catalog that says the R430's PCIe slots are rated for 75W... which is the standard power for a PCIe slot. This got me thinking: could Dell's 25W limit for PCIe slots on 1U servers (a 2U server's full-height slots are rated for 75W, like the ones in my R720) be due more to cooling (a thermal limit) than trace width (an electrical limit)? After all, expansion cards in rack servers are expected to be cooled by the server's internal fans, and given the airflow inside a 1U chassis, it makes sense that the thermal limit for any passively-cooled card is 25W. If it is really a thermal limit, then maybe it is possible to use an expansion card with a higher power rating as long as it is actively cooled. For example, a 40W Quadro P600, which has its own fan. Although a GT 1030 (30W) probably makes more sense.

And if the 25W is really a thermal limit, then there is nothing stopping me from using a PCIe extension cable to connect a 75W-or-higher two-slot GPU and placing it outside the R430. This is something I will actually try... once GPU prices become more sane. Right now, getting anything better than a GT 1030 is going to break the bank. (I was lucky; I got that GTX 1050 Ti for the price of a GT 1030...)

After the R430 arrived, I set about installing Proxmox on it as a hypervisor. There is actually an internal USB 3.0 port meant for connecting a flash drive for exactly this purpose. So I got a small external SSD and placed it inside the server.

To get to the port, you need to remove the expansion card riser cage.

As you can see, it is actually a tight fit.

With the high boost clock and thread count, I think this will become a video rendering machine. For the time being, I have installed Ubuntu MATE 21.10 and Windows 10 Pro as virtual machines on it. However, only one of them will be running at any one time, to better utilize the high thread count for a single VM. At first, I had wanted to use Windows as the OS for video rendering... but OpenShot and Kdenlive are still quite slow in rolling out GPU acceleration support on Windows. So Ubuntu it will be... although first, I have to get my hands on a GT 1030...

After installing the VMs, I thought: hey, Proxmox has a cluster function, so why not put the R720 and R430 into a single cluster and manage them together? Well... unfortunately, I found out (the hard way) that you cannot join a node that already has VMs to a cluster. I set up a new cluster on the R720, which is the main machine. Then I tried to join the R430 to that cluster... and got an error message. Dilemma. Delete the Ubuntu and Windows VMs? Or?
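
For reference, the commands involved are roughly these (the cluster name and IP address are made-up examples):

    # On the R720 (the main machine): create the cluster
    pvecm create homelab

    # On the R430: join using the R720's IP address
    pvecm add 192.168.1.10
    # ...which is where it fails if the node already has VMs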

Or: back up the two VMs, delete them, join the cluster, and restore the VMs. Which took a bit of time (especially the restore process) but was still faster than reinstalling and updating Windows. Yes, spinning up an Ubuntu VM is easy. Windows... is a snail.
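
On the R430, that workaround looks roughly like this. A sketch, not a recipe: the VM IDs, storage name, and archive paths are examples, and the actual backup filenames include a timestamp:

    # Back up both VMs to local storage, stopping them first
    vzdump 100 --compress zstd --storage local --mode stop
    vzdump 101 --compress zstd --storage local --mode stop

    # Remove the VMs so the node is empty
    qm destroy 100
    qm destroy 101

    # Now the join succeeds
    pvecm add 192.168.1.10

    # Restore the VMs from the backup archives
    qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100
    qmrestore /var/lib/vz/dump/vzdump-qemu-101-<timestamp>.vma.zst 101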

Lesson: If you want to have your home servers in a cluster, set up that cluster BEFORE installing virtual machines.

TODO: Install syncthing
