Tuesday, February 28, 2023
Watching Suzume again (6th time)
A month has passed since I last watched Suzume, so I thought I would go watch it again today.
Two things that I thought I would write about this time.
1. When Suzume first went to the deserted hot spring town, there was a barrier that blocked the path to the town. She jumped over it like a hurdle. The second time when she went back, that barrier had been moved to the side of the path, most likely by Sota.
2. When Suzume brought Sota back home to administer first aid, there was a scene showing high heels thrown out of their shoe boxes. These shoes were not in Suzume's room, implying that they belong to her aunt Tamaki. They were probably bought by Tamaki but she never had a chance to actually wear them, since she spent her time taking care of Suzume instead of dating and partying. More about Tamaki can be found by reading Suzume—Tamaki's Story (unofficial translation of 小説すずめの戸締まり~環さんのものがたり~), a short story written by Shinkai Makoto and handed out to movie-goers in Japan.
I will probably catch Suzume again, I am quite sure...
Posted by Teck at 2/28/2023 10:00:00 PM 0 comments
Labels: Movies
Thursday, February 23, 2023
Titanic (25th anniversary 3D re-release)
Titanic... I watched this movie several times (I think five) when it first screened. The movie was originally released in December 1997, and I think I caught it a few times between the end of 1997 and early 1998. So when I heard that it was being re-released in Japanese cinemas for its 25th anniversary, I was interested in catching it again.
The 3D remastered version started showing in Japanese cinemas on 10 February 2023, and was scheduled for a two week run. Due to work, I had not been able to catch it. Until today, the last day of its 25th anniversary re-release. Phew!
The movie was showing in 3D, IMAX 3D, and Dolby 3D. I was very interested in the Dolby 3D version, but I guess so was everyone else, because every time I tried to book tickets, it was fully booked. In the end, I gave up and settled for the 3D version. Guess this makes it six.
Today is a public holiday in Japan, and the show I caught was the first for today, in the morning at 8:50 a.m. Still, the cinema was bustling with people when I arrived, and I soon knew why. The screen, which could house 420 viewers, was full. Everyone was here to see the Titanic. Wow...
As someone who used to work on ships, the shipboard scenes were familiar, and the scenes of the ship sinking were... disturbing. A bit too real. And just like how some of those scenes affected me 25 years ago, I found tears in my eyes this time too. Even by today's standards, I think the visual effects are amazing, which says a lot since this film was released 25 years ago. But it is not just about the visual effects. A good movie is enjoyable, both in sight and sound. A great movie is memorable because it uses sight and sound to convey a strong story that is enhanced by proper music scoring. I think this is what sets Titanic apart from the rest.
This is definitely a film with rewatch value, although at 195 minutes (that's more than 3 hours!), it is not easy to find the time to keep watching it again and again. Still, I am glad that I made the trip this time to catch this great movie on the big screen again.
Posted by Teck at 2/23/2023 04:11:00 PM 0 comments
Labels: Movies
Sunday, February 19, 2023
Yushui 雨水
After Lichun, or 立春, comes Yushui, or 雨水, another of the 二十四節気 24 solar terms. It literally means "rain water", and signifies an increase in rain.
Posted by Teck at 2/19/2023 01:51:00 PM 0 comments
Labels: Calligraphy
Saturday, February 18, 2023
Dynamic VFIO binding and unbinding for GPU passthrough
Recently, I installed Proxmox with a desktop environment on an HP Z240 SFF computer for the purpose of learning how to do dynamic VFIO binding and unbinding of a discrete GPU for GPU passthrough to a virtual machine.
First, a bit about VFIO binding and GPU passthrough. In order to pass a GPU (or some other PCI device) to a VM, the host must not be using that device. The device is instead bound to VFIO so that it can be passed to the VM. The usual method, given in the Proxmox docs, is to pass the device IDs to the vfio-pci module at boot. It is a straightforward method, but it means that the device will not be available to the host at all.
(Another disadvantage of the method in the official docs is when you have several devices with the same vendor:device ID pair, and you only want to pass through some of them. Passing vendor:device ID pairs to the vfio-pci module at boot means that all devices with that ID pair will be unavailable to the host.)
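For completeness, the per-device alternative uses the kernel's driver_override attribute, which pins a single device (chosen by address, not by ID) to vfio-pci while identical siblings keep their normal driver. A minimal sketch of my own (not from the Proxmox docs; the sysfs root is a parameter only so the logic can be tried off-box — on a real host it is /sys/bus/pci and the writes need root):

```shell
#!/bin/bash
# Sketch: pin ONE PCI device (by address) to vfio-pci via its
# driver_override attribute, then ask the kernel to reprobe it.
override_to_vfio() {
    local dev=$1 root=${2:-/sys/bus/pci}
    echo vfio-pci > "$root/devices/$dev/driver_override"
    echo "$dev" > "$root/drivers_probe"
}
```

Clearing driver_override again (writing an empty string to it) undoes the pin.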
In my case, the HP Z240 SFF has an Intel Xeon E3-1245v5 CPU, which comes with Intel HD Graphics P530 as its integrated GPU. At the same time, I installed an Nvidia Quadro P600 as a discrete GPU in its PCIe slot. Here, I want to use the integrated GPU for the host, and pass through the discrete GPU to a VM.
1. Make sure that the integrated GPU is used during boot. Usually, there will be a setting in BIOS to select the integrated GPU as the primary video device. This setting needs to be selected.
2. Make sure X11 uses the integrated GPU. This is done by creating a file called /etc/X11/xorg.conf.d/10-i915.conf with the following.
Section "ServerLayout"
Identifier "layout"
Screen 0 "intel"
EndSection
Section "Device"
Identifier "intel"
Driver "modesetting"
BusID "PCI:0@0:2:0"
EndSection
Section "Screen"
Identifier "intel"
Device "intel"
EndSection
Replace the BusID value with the correct PCI slot of your integrated GPU. For my case, running
lspci | grep VGA
gives me
00:02.0 VGA compatible controller: Intel Corporation HD Graphics P530 (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation GP107GL [Quadro P600] (rev a1)
My integrated GPU is at 00:02.0, so
BusID "PCI:0@0:2:0"
was what I used.
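Note that Xorg wants the BusID values in decimal while lspci prints them in hex, so the numbers only look identical for slots below 0a. A small conversion helper (my own sketch; it assumes PCI domain 0, which is the usual case):

```shell
#!/bin/bash
# Convert an lspci-style slot ("bus:dev.func", hex) to an Xorg BusID
# ("PCI:bus@domain:dev:func", decimal). Assumes PCI domain 0.
to_busid() {
    local slot=$1 bus dev func rest
    bus=$(( 0x${slot%%:*} ))
    rest=${slot#*:}
    dev=$(( 0x${rest%%.*} ))
    func=$(( 0x${rest##*.} ))
    printf 'PCI:%d@0:%d:%d\n' "$bus" "$dev" "$func"
}
to_busid 00:02.0   # -> PCI:0@0:2:0
```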
3. Follow the Proxmox docs to set the necessary kernel boot parameters for passthrough. Open /etc/default/grub and change the line containing GRUB_CMDLINE_LINUX_DEFAULT to
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt pcie_acs_override=downstream"
Then, run
update-grub
to make sure the new parameters are reflected.
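After rebooting, it is worth confirming that the IOMMU is active and that the GPU (and its audio function) sits in its own IOMMU group. A quick sketch of my own that walks /sys/kernel/iommu_groups (the directory is a parameter only so the logic can be exercised elsewhere):

```shell
#!/bin/bash
# List IOMMU groups and the PCI addresses of their member devices.
# The real group directory is /sys/kernel/iommu_groups.
list_iommu_groups() {
    local root=${1:-/sys/kernel/iommu_groups} g d
    for g in "$root"/*/; do
        [ -d "$g" ] || continue
        g=${g%/}
        echo "group ${g##*/}:"
        for d in "$g"/devices/*; do
            [ -e "$d" ] && echo "  ${d##*/}"
        done
    done
}
list_iommu_groups
```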
Create /etc/modprobe.d/kvm.conf with the following
options kvm ignore_msrs=1 report_ignored_msrs=0
As the Quadro P600 has HDMI audio, I also created /etc/modprobe.d/snd-hda-intel.conf with the following
options snd-hda-intel enable_msi=1
Then run
update-initramfs -u -k all
to make sure the modules are properly reflected.
4. For switching between integrated and discrete GPU, I also installed bumblebee using
apt install bumblebee
which will also add a list of modules that will be blacklisted so that Nvidia drivers do not get loaded automatically.
5. Next, I created a directory in the /root directory called passthrough, which will be used to hold various scripts needed.
mkdir /root/passthrough
6. Create the configuration file /root/passthrough/kvm.conf, which will hold the PCI addresses and drivers of the devices to pass through.
## Contents of kvm.conf
VIRSH_GPU_VIDEO=0000:01:00.0
VIRSH_GPU_AUDIO=0000:01:00.1
GPU_VIDEO_DRIVER=nvidia
GPU_AUDIO_DRIVER=snd_hda_intel
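The addresses can be read off `lspci -Dnn`. To double-check which driver a device is currently on (useful for filling in the *_DRIVER values), the sysfs driver symlink can be resolved. A sketch of my own helper (the sysfs root is a parameter only for testing; the real path is /sys/bus/pci/devices):

```shell
#!/bin/bash
# Print the driver currently bound to a PCI device by resolving its
# sysfs "driver" symlink, or "(none)" if nothing is bound.
current_driver() {
    local dev=$1 root=${2:-/sys/bus/pci/devices}
    if [ -e "$root/$dev/driver" ]; then
        basename "$(readlink -f "$root/$dev/driver")"
    else
        echo "(none)"
    fi
}
# e.g. current_driver 0000:01:00.0 prints "nvidia", "vfio-pci", or "(none)"
```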
7. Create the VFIO binding/unbinding helper script /root/passthrough/vfio.sh with the following:
#!/bin/bash
set -x
function unbind() {
busid=$1
vendor=$(cat /sys/bus/pci/devices/$busid/vendor)
device=$(cat /sys/bus/pci/devices/$busid/device)
modprobe vfio-pci #>/dev/null 2>/dev/null
if [ -e /sys/bus/pci/devices/$busid/driver ]; then
printf "Unbinding %s (%s:%s)\n" "$busid" "$vendor" "$device"
echo $busid > /sys/bus/pci/devices/$busid/driver/unbind
#echo vfio-pci > /sys/bus/pci/devices/$dev/driver/driver_override
fi
echo -n "$vendor $device" > /sys/bus/pci/drivers/vfio-pci/new_id && echo "$busid -> vfio-pci"
}
function rebind() {
busid=$1; shift
drv=$1; shift
vendor=$(cat /sys/bus/pci/devices/$busid/vendor)
device=$(cat /sys/bus/pci/devices/$busid/device)
printf "Rebinding %s (%s:%s)\n" "$busid" "$vendor" "$device"
echo -n "$vendor $device" > /sys/bus/pci/drivers/vfio-pci/remove_id
echo $busid > /sys/bus/pci/devices/$busid/driver/unbind
echo $busid > /sys/bus/pci/drivers/$drv/bind && echo "$busid -> $drv"
}
cmd=$1; shift || true
case $cmd in
unbind)
unbind $@
;;
rebind)
rebind $@
;;
*)
printf "Usage: %s [unbind <busid> | rebind <busid> <drv>]\n" "$0"
exit 1
;;
esac
Remember to make it executable by
chmod +x /root/passthrough/vfio.sh
8. Create the script to bind the GPU, /root/passthrough/bind_vfio.sh, with the following:
#!/bin/bash
## Load the config file
source "/root/passthrough/kvm.conf"
systemctl stop bumblebeed
systemctl stop nvidia-persistenced
# Unload Nvidia
modprobe -r nvidia_drm
modprobe -r nvidia_modeset
modprobe -r drm_kms_helper
modprobe -r nvidia
modprobe -r i2c_nvidia_gpu
modprobe -r drm
modprobe -r nvidia_uvm
## Load vfio
modprobe vfio
modprobe vfio_iommu_type1
modprobe vfio_pci
modprobe vfio_virqfd
## Unbind gpu from nvidia and bind to vfio
/root/passthrough/vfio.sh unbind $VIRSH_GPU_VIDEO
/root/passthrough/vfio.sh unbind $VIRSH_GPU_AUDIO
Remember to make it executable by
chmod +x /root/passthrough/bind_vfio.sh
9. Create the script to unbind the GPU, /root/passthrough/unbind_vfio.sh, with the following:
#!/bin/bash
## Load the config file
source "/root/passthrough/kvm.conf"
## Unbind gpu from vfio and bind to nvidia
/root/passthrough/vfio.sh rebind $VIRSH_GPU_VIDEO $GPU_VIDEO_DRIVER
/root/passthrough/vfio.sh rebind $VIRSH_GPU_AUDIO $GPU_AUDIO_DRIVER
## Unload vfio
modprobe -r vfio_pci
modprobe -r vfio_iommu_type1
modprobe -r vfio
modprobe -r vfio_virqfd
# Reload nvidia modules
modprobe nvidia_drm
modprobe nvidia_modeset
modprobe drm_kms_helper
modprobe nvidia
modprobe drm
modprobe nvidia_uvm
systemctl start nvidia-persistenced
systemctl start bumblebeed
Remember to make it executable by
chmod +x /root/passthrough/unbind_vfio.sh
10. Test the scripts by first running bind_vfio.sh, after which
lspci -v -s 01:00
should show
01:00.0 VGA compatible controller: NVIDIA Corporation GP107GL [Quadro P600] (rev a1) (prog-if 00 [VGA controller])
Kernel driver in use: vfio-pci
Kernel modules: nvidia
01:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
11. Make sure unbinding works by running unbind_vfio.sh, after which
lspci -v -s 01:00
should show
01:00.0 VGA compatible controller: NVIDIA Corporation GP107GL [Quadro P600] (rev a1) (prog-if 00 [VGA controller])
Kernel driver in use: nvidia
Kernel modules: nvidia
01:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
12. If this works, create a hook script
/var/lib/vz/snippets/gpu-hookscript.pl
with the following:
#!/usr/bin/perl
# Example hook script for PVE guests (hookscript config option)
# You can set this via pct/qm with
# pct set <vmid> -hookscript <volume-id>
# qm set <vmid> -hookscript <volume-id>
# where <volume-id> has to be an executable file in the snippets folder
# of any storage with directories e.g.:
# qm set 100 -hookscript local:snippets/hookscript.pl
use strict;
use warnings;
print "GUEST HOOK: " . join(' ', @ARGV). "\n";
# First argument is the vmid
my $vmid = shift;
# Second argument is the phase
my $phase = shift;
if ($phase eq 'pre-start') {
# First phase 'pre-start' will be executed before the guest
# is started. Exiting with a code != 0 will abort the start
print "$vmid is starting, doing preparations.\n";
system("/root/passthrough/bind_vfio.sh");
# print "preparations failed, aborting."
# exit(1);
} elsif ($phase eq 'post-start') {
# Second phase 'post-start' will be executed after the guest
# successfully started.
print "$vmid started successfully.\n";
} elsif ($phase eq 'pre-stop') {
# Third phase 'pre-stop' will be executed before stopping the guest
# via the API. Will not be executed if the guest is stopped from
# within e.g., with a 'poweroff'
print "$vmid will be stopped.\n";
} elsif ($phase eq 'post-stop') {
# Last phase 'post-stop' will be executed after the guest stopped.
# This should even be executed in case the guest crashes or stopped
# unexpectedly.
print "$vmid stopped. Doing cleanup.\n";
system("/root/passthrough/unbind_vfio.sh");
} else {
die "got unknown phase '$phase'\n";
}
exit(0);
Remember to make it executable by
chmod +x /var/lib/vz/snippets/gpu-hookscript.pl
13. If you have not already created a VM, create one (VM 100 will be used for this example).
14. This hook script can then be added to a VM by running
qm set 100 --hookscript local:snippets/gpu-hookscript.pl
Here, 100 is the VM number (which can be replaced with the actual VM number).
15. Now, when you start the VM, it should bind the GPU to vfio, and after stopping the VM, it should unbind the GPU from vfio.
Time to test... so I created a Linux Mint VM on this server. After the post-install stuff such as updating the packages, I shut down the VM. Then, on the host server, as root, I added the hookscript using
qm set 100 --hookscript local:snippets/gpu-hookscript.pl
and started the VM.
On the host server, I ran
lspci -vv -s 01:00 | grep Kernel
and got:
Kernel driver in use: vfio-pci
Kernel modules: nvidia
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
Good! The Quadro P600 had been bound to VFIO.
Next, I proceeded to shut down the VM. After the VM was stopped, on the host server, I ran
lspci -vv -s 01:00 | grep Kernel
and got:
Kernel driver in use: nvidia
Kernel modules: nvidia
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
Good! The Quadro P600 had been unbound from VFIO. I could even run nvidia-smi to confirm that it was available to the host server.
I started the VM again, and installed the proprietary Nvidia drivers from within the VM, rebooted the VM, and could confirm that the Nvidia drivers were loaded correctly.
Hurray! Dynamic VFIO binding and unbinding!
Note: If your GPU has more "devices" like USB and serial controllers, just add them as additional variables to the /root/passthrough/kvm.conf file, and edit /root/passthrough/bind_vfio.sh and /root/passthrough/unbind_vfio.sh to add the commands to bind/unbind these devices.
For example, in /root/passthrough/kvm.conf:
VIRSH_GPU_USB=0000:01:00.2
GPU_USB_DRIVER=xhci_pci
Then, in /root/passthrough/bind_vfio.sh, add:
/root/passthrough/vfio.sh unbind $VIRSH_GPU_USB
And in /root/passthrough/unbind_vfio.sh, add:
/root/passthrough/vfio.sh rebind $VIRSH_GPU_USB $GPU_USB_DRIVER
This should allow you to make sure all the "devices" on the GPU are passed to the guest VM.
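To catch every function on the GPU's slot in the first place, the sysfs device directory can simply be globbed. A sketch of my own helper (the directory is a parameter only for testing; the real one is /sys/bus/pci/devices):

```shell
#!/bin/bash
# Enumerate every PCI function on a slot, to spot extra GPU "devices"
# (audio, USB, serial) that also need passing through.
list_functions() {
    local slot=$1 root=${2:-/sys/bus/pci/devices}
    local d
    for d in "$root/$slot".*; do
        [ -e "$d" ] && basename "$d"
    done
}
list_functions 0000:01:00
```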
References used in developing the scripts and such:
https://pve.proxmox.com/wiki/Pci_passthrough
https://www.reddit.com/r/VFIO/comments/s0spr7/using_gpu_passthrough_on_proxmox_but_when_i_shut/ (especially for the vfio.sh script!)
https://github.com/bryansteiner/gpu-passthrough-tutorial (for hints on the bind_vfio.sh and unbind_vfio.sh scripts)
Posted by Teck at 2/18/2023 01:35:00 PM 0 comments
Labels: Homelab
Friday, February 10, 2023
Installing Proxmox VE followed by Xfce desktop
In my comparison of ways to install a Proxmox workstation, I stated that one way is to install Proxmox, and then install the Xfce desktop environment, followed by theming Xfce to look like Mint. Given that this is the recommended method in the Proxmox docs, I decided to give it a try.
First, I installed Proxmox the usual way. Nothing special here.
According to the Proxmox docs, the next step is to install X Windows by installing Xfce, a greeter (lightdm), and Chromium. However, I decided to make a slight change, since I wanted to easily install Xfce by using the task-xfce-desktop metapackage.
So, I instead added the non-free Debian repo first. This step is found in a later part of the Proxmox docs. I changed buster to bullseye since Proxmox is now running on Debian 11 instead of 10.
cat <<EOF >>/etc/apt/sources.list
deb http://deb.debian.org/debian bullseye main contrib non-free
deb-src http://deb.debian.org/debian bullseye main contrib non-free
deb http://security.debian.org/debian-security/ bullseye-security main
deb-src http://security.debian.org/debian-security/ bullseye-security main
deb http://deb.debian.org/debian bullseye-updates main contrib non-free
deb-src http://deb.debian.org/debian bullseye-updates main contrib non-free
EOF
Then, I went back and followed the docs, using task-xfce-desktop instead of xfce4.
apt update
apt upgrade
apt install task-xfce-desktop lightdm chromium
Next was to add a normal user so that I don't have to run things as root.
adduser user
(replace user with the user name you want)
I am doing this on an HP Z240 SFF desktop, which has an Intel E3-1245v5 processor (which has integrated graphics) as well as an Nvidia Quadro P600 discrete GPU. I plan to use this system to play around with dynamic VFIO binding and unbinding. Anyway, I next installed the Nvidia proprietary drivers from the Debian repo (which, as of now, is version 470... way way way older than the version 525 I have on other systems).
First, the open-source nouveau driver needs to be blacklisted.
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
Then, the actual driver installation, which requires kernel headers.
apt install pve-headers
apt-get update
apt-get install nvidia-driver
I then rebooted the system, and was greeted by a GUI login screen, which allowed me to log in as a normal user into the Xfce desktop environment.
I followed my own guide to theme Xfce to look like Mint. This required the installation of Mint packages. First, I needed to add the official Mint repo by creating this file /etc/apt/sources.list.d/official-package-repositories.list with the following contents.
deb http://packages.linuxmint.com elsie main upstream import backport
This allowed me to install the following packages.
apt install mint-x-icons mint-y-icons mint-y-icons-legacy mint-themes mint-themes-legacy mint-artwork
Running neofetch got me this.
Note: Similar to installing Proxmox on top of LMDE 5, I had an error where Proxmox did not recognise the version codename "elsie" (which is LMDE 5). It caused the error message "unknown Debian code name 'elsie' (500)", so I had to edit /usr/lib/os-release and change
VERSION_CODENAME=elsie
to
VERSION_CODENAME=bullseye
This felt really smooth, but I guess that is the advantage of following the official Proxmox docs. Next would be dynamic VFIO binding and unbinding!
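For what it's worth, that manual codename edit can also be scripted as a single sed substitution (a sketch of my own; the function takes the file as an argument so it can be rehearsed on a copy before touching the real /usr/lib/os-release):

```shell
#!/bin/bash
# One-line equivalent of the edit above: switch VERSION_CODENAME from
# elsie back to bullseye in the given os-release file.
fix_codename() {
    sed -i 's/^VERSION_CODENAME=elsie$/VERSION_CODENAME=bullseye/' "$1"
}
```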
Posted by Teck at 2/10/2023 10:11:00 AM 0 comments
Labels: Homelab
Tuesday, February 07, 2023
Comparing various ways to install a Proxmox developer workstation
I recently installed Proxmox on top of Linux Mint Debian Edition 5 and themed the Xfce desktop to get that Mint look. One may wonder, what is the advantage of this process? Especially since the official Proxmox docs contain steps necessary for installing Proxmox as a developer workstation as well as installing Proxmox on top of vanilla Debian.
First, the end goal: a Proxmox developer workstation running Xfce desktop themed to look like Mint.
The options?
1. Install Proxmox, and then install the Xfce desktop environment, followed by theming Xfce to look like Mint.
2. Install Debian with Xfce, then install Proxmox on top of it, and theme Xfce to look like Mint.
3. Install LMDE 5, then install Proxmox on top of it, then install Xfce desktop environment (because LMDE 5 comes with Cinnamon by default), and theme Xfce to look like Mint again. (This is the method I used.)
Method 1 is the "normal" method. Here, the Proxmox features should definitely work, since it is first and foremost Proxmox. You also get a clean and lean install of the Xfce desktop, to which you can slowly add programs as necessary. One thing to note, though, is that Proxmox usually partitions a smaller base partition for itself, while using the rest of the storage device as an LVM for VMs. If you are going to install a lot of stuff onto the base partition, you may soon run out of space...
Method 2 is the "normal" workaround, since by installing Debian first, you can partition the storage device to your own liking. Installing Proxmox subsequently on a more or less clean install of Debian means that Proxmox features should work without issues, since Proxmox is based on Debian in the first place. This method should provide a lean desktop environment that also runs Proxmox.
Method 3 looks like a very complex process to achieve the same outcome. Actually, you get the same basic outcome, but you also gain other features. For a start, you gain all the extras that come with Linux Mint, as compared to vanilla Debian. For example, codecs and such that may be needed for video playback, and ease of installing proprietary drivers. At the same time, this can be baggage if you don't use them. In my case, I ended up with a Cinnamon desktop and corresponding files that won't be used.
At the end of the day, the bottom line is that all three methods work and deliver the same baseline results. And for Linux-based systems, you can basically achieve whatever result you want, as long as you are willing to spend time on tweaking. Linux is like the diversity champion in operating systems: there is something for everyone, and if not, you can always create your own.
Posted by Teck at 2/07/2023 06:10:00 PM 0 comments
Labels: Homelab
Saturday, February 04, 2023
Watching "Kin no Kuni Mizu no Kuni" (金の国 水の国)
Kin no Kuni Mizu no Kuni (金の国 水の国) actually premiered in Japan last week, but I didn't manage to catch it until today.
The movie itself is 117 minutes, almost two hours. It started with a brief narration about the setting: two countries that have not been on good terms with each other for a thousand years, waging wars countless times. An ancient sign of goodwill is for Country A to send its most beautiful girl to Country B as a wife, and for Country B to send its most intelligent youth to Country A as a husband. Instead, Country A sent a kitten to be the wife of Naranbayar (a young architect from Country B), and Country B sent a puppy to be the husband of Sara (the 93rd princess of Country A, born to one of the king's concubines). Naranbayar and Sara kept their spouses' true identities a secret, knowing that their respective kings would be enraged and go to war again.
Country A is rich due to trade but lacks water, while Country B has tons of water but is otherwise poor due to the lack of trade routes. This is why Country A is the land of gold, while Country B is the land of water. Country A has a Persian/Arabian look, something that makes you think about Arabian Nights. Meanwhile, in terms of architecture and clothing, Country B seems to be Oriental, with something of a northern Chinese + nomadic Mongol look. They have been fighting each other hoping to get what each needs through force.
As luck would have it, Naranbayar and Sara ended up bumping into each other, and along the way, realised that the other person is the one who was supposed to have received the bride/husband from his/her own country. To keep up the facade, they posed as each other's spouse, and this white lie was used to further propel their stories forward.
I shan't go much further into the story, but it is mainly about a win-win alternative to war. The music by Evan Call alone makes this movie worth watching. Actually, that was what brought me to the cinema in the first place. Evan Call composed the music used in this movie, as well as the three songs: 優しい予感 (Yasashii Yokan), Brand New World, and Love Birds. All these songs were performed by 琴音 (Kotone) and you can hear them here.
The movie has only been showing for a week, but it probably hasn't drawn much of a crowd, and is not expected to do so, since the local theater is showing it on one of the smallest screens it has. And the movie is just barely into its second week...
I watched the movie at lunch time on a Saturday (it only has two screenings today, a weekend!) and the smallest screen at the cinema was not even half filled. Actually, I wonder why. I mean, one of the main characters is voiced by a popular actress, and the movie is produced by a major TV network. One would think that it would do better at the box office. By comparison, the special screening of Demon Slayer (which is basically episodes 10 and 11 of the Entertainment District arc that aired last year, plus episode 1 of the Swordsmith Village arc that will air later this year) had a ton of people...
Posted by Teck at 2/04/2023 05:40:00 PM 0 comments
Labels: Movies
Lichun 立春
Lichun, or 立春, is one of the 二十四節気 24 solar terms. It signifies the start of spring. This is the first in my series of using the 24 solar terms for calligraphy practice.
Posted by Teck at 2/04/2023 01:48:00 PM 0 comments
Labels: Calligraphy