Wednesday, August 31, 2022

Official piano score of "Though Seasons Change"

When it was announced that a hardcopy of the official score of "Though Seasons Change" would be available on demand, I ordered myself a copy, and it recently arrived.

Updated 2 May 2023 with details on how to buy the hardcopy outside Japan.

It is basically the same as the electronic version, just printed and bound. The paper quality is good; the hardcopy uses glossy A4 paper. The covers (front and back) are printed in colour, and the pages inside are in black and white.

Of course, with the electronic version, I could have just printed it out on my own for a hardcopy, but I think paying 3,300 yen was worth it, since the printing is of good quality and it came bound. If I had done my own printing, it would have been stapled together or held by some other primitive method.

Now to learn how to play the piano... 😅

Those who wish to purchase a copy from overseas can try buying from Buyee, which is the site I got referred to when I clicked on "ship overseas" on the Japanese site. It says that Buyee is the official international shipping partner of honto.
1. Go to the product page on honto here.
2. Add the item to your cart.
3. Then, in this area here, click the link about overseas shipping.
4. A bubble will appear at the bottom of the screen. Click on "add to cart".
5. This should bring you to Buyee, where you can complete the purchase.

Sunday, August 28, 2022

Pausing VMs to reduce power consumption

My primary homelab server is the Dell Poweredge R720 rack server. On it, there are three VMs that I keep running all the time: a TrueNAS VM that manages my "cloud" storage, a media server VM running Jellyfin, and an Ubuntu server for services like PiHole. At the same time, I also run two other VMs on the R720: a Windows VM for work, and a Linux Mint VM for general use (web browsing, YouTube videos, and such).

When all 5 VMs are running, power consumption is about 240+W. When the Windows VM and Linux Mint VM are paused, though, power consumption drops to around 190+W. This saves about 50W of electricity.

So I came up with a set of simple scripts that I run from my HP T630 thin client to pause and resume the VMs before going to sleep and after waking up. Also, when the VMs are paused, I prefer the fans to run at a lower speed, so I disable third-party PCIe fan response, and turn it back on only when I resume the VMs. I use the racadm commands I wrote about here.
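
Note that both scripts assume passwordless (key-based) SSH to the Proxmox host and the iDRAC. For the Proxmox host, the usual way to set that up would be something like
ssh-copy-id root@pve720.local
The iDRAC handles SSH keys through its own interface, so that hop may still prompt for a password until it is set up separately.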

pausevm.sh
#!/bin/bash
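# Suspend the Windows VM (222) and the Linux Mint VM (333) to RAM,
# then disable third-party PCIe fan response on the iDRAC so the fans can slow down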
echo "Pause VM 222"
ssh root@pve720.local "qm suspend 222"
sleep 3
echo "Pause VM 333"
ssh root@pve720.local "qm suspend 333"
sleep 3
ssh root@idrac720.local "racadm set system.thermalsettings.ThirdPartyPCIFanResponse 0"
echo "pausevm.sh complete"


resumevm.sh
#!/bin/bash
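# Re-enable third-party PCIe fan response on the iDRAC, resume both VMs,
# then restart NTP on the Linux Mint VM so its clock catches up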
ssh root@idrac720.local "racadm set system.thermalsettings.ThirdPartyPCIFanResponse 1"
echo "Resume VM 222"
ssh root@pve720.local "qm resume 222"
sleep 3
echo "Resume VM 333"
ssh root@pve720.local "qm resume 333"
sleep 10
echo "Restarting NTP service on vm333.local"
ssh user@vm333.local "sudo service ntp restart"
echo "resumevm.sh complete"


After pausing the VMs, time "stops" for them, so it is necessary to resync the time after resuming. For the Linux Mint VM (VM 333), I can do this by restarting the NTP service. For the Windows VM (VM 222), I usually do it manually after using NoMachine to access the VM, although it sometimes syncs by itself before that.
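
In theory, if I enabled an SSH server on the Windows VM, the resync could be scripted too using Windows' built-in w32tm tool (vm222.local below is just a placeholder hostname, not something in my current setup):
ssh user@vm222.local "w32tm /resync"
(w32tm /resync normally needs the Windows Time service running and an elevated prompt.)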
 
A bit about terminology. The web UI of Proxmox uses "Pause" and "Hibernate". "Pause" suspends the VM to memory; "Hibernate" suspends it to disk. It is faster to resume a VM that has been suspended to memory, which is why pausing is what I prefer to do.
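
For completeness, the same two states can be reached with the qm command that the scripts above already use; as far as I can tell, qm suspend defaults to suspending to memory, and the --todisk option is what corresponds to "Hibernate":
qm suspend 333              # "Pause": suspend to memory (what pausevm.sh does)
qm suspend 333 --todisk 1   # "Hibernate": suspend to disk; the VM is started again later with qm start
qm resume 333               # resume a VM that was suspended to memory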

Saturday, August 20, 2022

Changing the GPUs in the Dell Poweredge R720 server

In my first post about GPUs for my Proxmox servers, I mentioned that I need them:
1. For Jellyfin hardware acceleration of video encoding.
2. For my own "game server".
3. To just run desktops on virtual machines.

Basically, I had been using this setup on my Dell Poweredge R720 server:
1. Quadro P600 for my media server VM
2. GTX 1050Ti for my Windows 10 VM, which is used for work and light gaming
3. Yeston RX550 for my Linux VM, which I use for everything else

This setup works fine, but I recently managed to get a few newer GPUs, so I decided to make the following upgrades:
1. Nvidia T400 for the media server VM
2. GTX 1650 for the Windows 10 VM
3. Nvidia T600 for the Linux VM

All three of these cards are based on TU117. The only differences are the number of cores and the amount of VRAM. The T400 has 384 cores and 2GB of VRAM. The T600 has 640 cores and 4GB of VRAM. The GTX 1650 has 896 cores and 4GB of VRAM. I got them used, but at relatively good prices.

Nvidia T400 (it actually came with the accessories like low profile slot and cables, but without the original box)

Nvidia T600 (came with the original box and all the included accessories)

GTX 1650 (card only)

The reason I got the single-slot version of the GTX 1650 was that I wanted to keep the RX550 inside the R720 server. But in the end, I figured that I will probably never run macOS on this server, so I took out the RX550. It may find its way into the R430 server if I ever need to run macOS.

Actually, I did a bit of shifting for the R430 too. It used to sport the Quadro P400 and GT1030, but I swapped out the GT1030 a while ago for a Quadro P600, because the lack of hardware encoding on the GT1030 limits its usefulness as a GPU for a VM running remote desktops.

I may also switch the T400 and T600, to use the T600 on the media server VM and the T400 on the Linux VM, since having 4GB of VRAM is probably more useful for the media server (when transcoding) compared to watching videos and such on the Linux VM. The good thing about running VMs is that it is just a matter of using the Proxmox GUI to change the PCI device being passed through to the VM. I am also keeping an eye out for the Nvidia T1000, which has 896 cores and 4GB or 8GB of VRAM. This has the same form factor as the T400 and T600 (single slot low profile) and will enable the R430 to run a Windows VM that may even work for gaming.
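
The same change can also be made from the Proxmox host's shell instead of the GUI; hostpci0 and the PCI address below are just examples and would need to match the actual entry and device address (from lspci) on the host, and the VM has to be restarted for the change to take effect:
qm set 333 --hostpci0 0000:42:00    # point VM 333 at a different passed-through GPU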

Also, right now, I am hesitant to use any GPU that requires external power (more than the 75W that the PCIe slot can deliver), but I am going to check out the power connector in the R720 to determine its pinout. Subsequently, I may put my GTX 1660 Super in the R720 if I can get a better GPU for my desktop PC.

Update 26 August 2022: Got a T1000 8GB. This is going into the R720, which means the new setup of the R720 will be: GTX 1650 for the Windows/Linux VM, T1000 8GB for the Linux/Windows VM, and T600 for the media server VM. The R430 will have a T400 and a Quadro P600. (The T1000 has the same number of CUDA cores as the GTX 1650, but it has 8GB of VRAM instead of 4GB; however, its clock speeds are lower. I am not sure if I want to use it for my Windows VM, which I use for work and light games, or in the Linux VM that I use for general purpose stuff.)

Update 27 August 2022: Swapped the cards around. The R720 now has the GTX 1650 (single slot), T1000 8GB, and T600. The R430 has the T400 and the Yeston RX550 4GB.
 
I have not done any stress tests yet on the R430's two cards, so I am not sure if they can reach their maximum performance, especially since the RX550 is a 50W TDP card but the PCIe slot of the R430 is stated to deliver only 25W.

Note to self regarding fans:
As a quick note to self, for iDRAC7 and iDRAC8, it is possible to set the fan offset through SSH. After SSHing into the iDRAC, execute
racadm set system.thermalsettings.FanSpeedOffset 255
to turn off fan offset, or
racadm set system.thermalsettings.FanSpeedOffset 0
for low offset (which is 20% on iDRAC8 and around 35% on iDRAC7), or
racadm set system.thermalsettings.FanSpeedOffset 1
for high offset (which is 55% on iDRAC8 and around 75% on iDRAC7).
The offset values 2 and 3 are also available in iDRAC8 for medium (35%) and maximum (100%) offset respectively.
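
To check what the offset is currently set to, the matching get command can be used:
racadm get system.thermalsettings.FanSpeedOffset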

Other RACADM CLI commands include:
racadm set system.thermalsettings.ThirdPartyPCIFanResponse 0
to disable third party PCIe fan response (1 to enable)
racadm set system.thermalsettings.ThermalProfile 0
to set thermal profile, with 0 being default, 1 being maximum performance, and 2 being minimum performance.
 
Previously, I had disabled the third party PCIe fan response because the R720 was in the same room as I was, and the higher fan speeds could be a bit noisy. But now that the servers have been relocated, I think that enabling third party PCIe fan response helps to keep the GPUs a bit cooler.
 
Update 1 September 2022: Some benchmark scores using glmark2
(glmark2 -s 1920x1080)
Nvidia T1000 8GB (on R720 VM with 8 vCPUs from E5-2667v2 and 32GB RAM): 2839
Nvidia T600 (on R720 VM with 4 vCPUs from E5-2667v2 and 32GB RAM): 2613
Nvidia T400 (on R430 VM with 12 vCPUs from E5-2643v3 and 32GB RAM): 1974
Yeston RX550 4GB (on R430 VM with 10 vCPUs from E5-2643v3 and 24GB RAM): 2074
For reference, the GTX 1660 Super on my desktop (AMD Ryzen 7 5700G with 32GB RAM) got a score of 5444.