Wednesday, July 28, 2021

Improving the Tellsis language translator: Update 3

Following the previous post, this is to consolidate my thoughts on the way ahead for the app, in terms of OCR.

So far, I have tried to train a Tesseract OCR model using my Tellsis font, but the result is that it only works on computer generated images (using the Tellsis font) and fails terribly on real world data (such as actual images from the Violet Evergarden series or handwritten text). Given that there isn't a lot of such real world data to use for training in the first place, it is going to be difficult to pursue this route.

Therefore, I thought of going back to basics. Basically, train a handwriting recognition model using Tensorflow, then deploy it to the Flutter app using TensorflowLite. There is already such a package available for deployment called tflite_flutter.
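As a rough sketch of that pipeline (the model architecture, input size, and 26-class output below are placeholder assumptions, not the app's actual model), a Keras model can be converted to a .tflite file that tflite_flutter can then load:

```python
# Sketch: build a small Keras classifier for single-character images and
# convert it to TensorFlow Lite for deployment with tflite_flutter.
# The architecture and the 26-class output are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),         # 28x28 grayscale input assumed
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(26, activation="softmax"),  # one class per character
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(...) on the character images would go here before conversion.

# Convert the (trained) model to the TensorFlow Lite format
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("telsis_ocr.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting telsis_ocr.tflite file is what would be bundled as an asset in the Flutter app.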

A quick search using Google revealed the following:
I am looking at this more for deployment as well as generic Tensorflow handwriting recognition training.
 
Similarly, I am looking at this more for deployment as well as generic Tensorflow handwriting recognition training.

Github repo Handwriting Recognition System that uses lines of handwritten text as training data. This looks promising if I can figure out the deployment aspect.

Github repo Siamese-Networks-for-One-Shot-Learning that uses data of single characters for training. This one looks promising for my use case.

I can probably rewrite the Python script version of the Tellsis translator to generate single line or single character output using my Tellsis font. The current Flutter version of the Tellsis translator can also be used to generate sample text, which I can then write out by hand and photograph or scan to use as training data. This solves the issue with training data.

Next is to find the time to do so... 😅
 
Anyone wants to do Tensorflow training on my behalf? 🥰

Saturday, July 24, 2021

Improving the Tellsis language translator: Update 2

I am still working (slowly) on the Tellsis language translator app in Flutter. After trying to train tesseract in the previous post, I thought I would prepare the UI for a file/image picker.

Bumped into a bigger problem. The dependencies (and their dependencies) are tightly tied to each other, and with the introduction of null safety, some dependencies on pub.dev are just broken until they get updated by their authors. For example, flutter_form_builder is now under major revamp, and it is going to be very difficult to compile a working version of the app: older versions of flutter_form_builder no longer work because their dependencies are broken. Trying to migrate to the alpha versions was a no-go too, since those have broken dependencies as well.

Still, it is good to know that there are file and image pickers that used to work with flutter_form_builder, and I am hoping that they eventually get updated to work (aka get out of alpha) so that I can use them.

Anyway, I have released v0.1.2 which added an "About" button and a placeholder button for the one that will eventually be used to load an image. v0.1.3 is the one with the updated packages but it cannot be built on Linux and Windows currently; I managed to compile a working APK for Android, though.

Update July 25, 2021: Small but important update... tesseract packages (like tesseract_ocr) on Flutter currently do not support desktop platforms. Only Android (which I have) and iOS (which I don't have). Testing OCR will take a bit of time. It also means that until someone comes up with a tesseract package that can work on desktop platforms, this feature won't be available for Linux and Windows users.

Friday, July 23, 2021

Catching Violet Evergarden The Movie before it leaves Yokohama again (14th viewing)

Today is the opening ceremony of Tokyo 2020. It is also the last day that Violet Evergarden The Movie is showing at Jack and Betty. Which also means it will be leaving Yokohama (again).

Between Violet Evergarden The Movie and the opening ceremony of Tokyo 2020, the choice is clear.

I dutifully booked my ticket a few days earlier, then took Vivi to the cinema. The journey was just short of an hour but it was worth it, even in the blazing sun.



In my post on the 13th viewing, I mentioned the flowers in Yuris' room and how they might have been changed. On confirmation today, no, the flowers are still the same. Yellow flowers by the window with orange roses in the room during the first visit by Violet; yellow flowers by the window with white roses in the room during Violet's second visit (when she talked about Yuris' letter to his younger brother); and pink flowers by the window with wilting white roses in the room on Violet's third visit, when she sealed the three letters.

This is the 14th time I am watching the movie, but it still brings tears to my eyes. Maybe even more tears. I still cry when the movie opens with Ann's house and the letters from her mother Clara. About wishes that cannot come true. The scenes with Yuris. Even the ending credits (especially the ending credits).

I wonder if the movie will return to Yokohama again...

My overall thoughts on Violet Evergarden The Movie
Events:
 
Translations of short stories:
Gilbert Bougainvillea and the Fleeting Dream (unofficial translation of "ギルベルト・ブーゲンビリアと儚い夢")
The Starry Night and the Lonely Two (unofficial translation of 星降りの夜とさみしいふたり)
Diethard Bougainvillea's If (unofficial translation of ディートフリート・ブーゲンビリアIf) 
The Tailor and the Auto-Memories Doll (unofficial translation of 仕立て屋と自動手記人形)
 
Tellsis (Nunkish) translation:
Last line of Violet's final letter to Gilbert
 
Insights on the movie:
 
Audio commentary notes:

 
All posts related to Violet Evergarden.


Wednesday, July 21, 2021

Visiting nearby shrines

With Vivi, it is easy to get around the neighbourhood, so I thought I would visit a few nearby shrines.

The first stop was Goreisha (五霊社).


There is a monument commemorating local residents who died during World War Two.

Next was the nearby Hachiman Shrine, but I got a bit lost and found this instead.

Finally, I got to Hachiman Shrine (八幡神社). Or rather, the torii that leads to it.
 
The shrine itself is up a flight of stairs in a bamboo forest.


Finally, I found the shrine.


I paid my respects at both shrines, asking the gods to watch over me. Then, I made my way back in the blazing sun...

Tuesday, July 20, 2021

Thoughts on freedom and the prolonging of the COVID-19 pandemic

Every time I look at the news, I am reminded about how the concept of freedom is contributing toward prolonging the COVID-19 pandemic.

I am not pro-authoritarian; I believe in freedom and liberty. But I also believe that with freedom comes responsibility.

Yet the concept of freedom has been weaponised by the government of the United States during the Cold War. In realist terms, the Cold War was a contest for hegemony between two superpowers. But it was turned into a war of ideology, marketed as a contest between freedom and authoritarianism.
 
And when the United States came out as the winner of the Cold War, people mistakenly believed that it was freedom that led the United States to win over the authoritarian USSR. But the real reasons for U.S. victory are much more complex, involving far more than one factor. Still, this resulted in freedom being put on a pedestal.
 
It resulted in people believing their own propaganda.
 
So today, we have people who believe they are free to act as they want. They think they know better than the experts. They ignore the calls for vaccination, physical distancing, and the wearing of masks.
 
But modern society is based on collective knowledge. We humans have evolved to the top of the food chain on Earth because we have developed methods for extensive specialisation and knowledge sharing. This is because we have limited lifespans and can only acquire so much knowledge. Instead of basing every decision purely on what each of us knows from our own experience, we have evolved to what we are today by drawing on collective knowledge in our decisions. That collective knowledge is science.

Science says vaccines, physical distance, and the wearing of masks help to prevent the spread of COVID-19. If we believe in science, we should act in accordance with this belief and take the proper measures, as prescribed by science, against COVID-19. Which means getting vaccinated (if you live in a place where you have access to vaccines), maintaining physical distance from other people, and wearing masks.

It is hubris to think that we know better than the experts. It is hubris to ignore what the experts say about vaccines, physical distance, and masks. It is hubris that will be our undoing.

And that is exactly what is happening. When we ignore the calls for distance and masks, the virus mutates into more dangerous strains, because a higher rate of infection also means the virus is able to mutate at a higher rate. The more mutations, the more likely it is to mutate into something that spreads more easily or becomes more fatal.

What makes it even more dangerous is that, with widespread infection, the chances of someone who has been vaccinated becoming infected again increases. With enough such cases, the virus may mutate into strains that can overcome the effectiveness of vaccines, rendering vaccines even less effective. This can become a vicious cycle as we race to develop vaccines that are more effective against mutated strains.

[Sidenote: Vaccines do not prevent a person from catching the virus. They help to artificially build up a person's immune system against a virus. So when the person catches that virus, the immune system is able to deal with the virus in a shorter period of time. Which means the virus has less time to duplicate (and thus mutate) and cause damage to a person's body. The result is that a person may not develop a virus count high enough to be classified as "infected". Or even if a person develops a virus count high enough to be classified as infected, the immune system is able to quickly reduce that count, which means the person recovers faster and usually with much lighter symptoms.]

Heeding the words of experts (vaccines, distance, masks) against following our own desires (go out, drink with friends, don't wear masks) has nothing to do with the infringement of freedom. The collective action of protecting people's lives in society is what living together is about. We live as groups because when everyone works together in a group, each playing our part, we all live safer and longer lives and are better able to use those lives to do the things we want. Freedom and responsibility come as a set. A person who wants to be free must do his or her part for society, because it is society that offers that person his or her freedom.

Those who seek true freedom know that heeding science will bring about freedom, not limit it. I only hope hubris does not blind the eyes of people.

Sunday, July 18, 2021

Violet Evergarden The Movie back in Yokohama (13th viewing)

For one week, Violet Evergarden The Movie will be showing again in Yokohama, at a small cinema called Jack and Betty. There are two screens here, one is Jack (capacity of 96 persons), the other Betty (capacity of 115 persons). The movie is showing at Betty.
(Note: There are spoilers toward the end of this post, please stop reading if you haven't watched the movie. Not really much of a spoiler, but I thought I would put a warning anyway.)

Which meant it was time to book tickets and watch it again. For the 13th time.

This time, I took Vivi there instead of the train.

It is so nice to see the posters back in a cinema.


There is even a small corner in the cinema introducing the movie. But it was a really small corner and I didn't have time to scrutinise everything since that would mean hogging the place and preventing other fans from taking a look.

 
It is a limited one-week run, with only one show per day. I got my ticket online, and even then, the cinema was already half-full. By the time the cinema opened its doors for us to enter before the show, there was already a crowd of fans, and it turned out to be a full house.

Anyway, the staff at the cinema told me that I could park right in front of the cinema, just beside the stairs. Which was what I did (instead of along the road, across from the cinema, which could result in Vivi being towed away).

Although Violet Evergarden The Movie was showing at Jack and Betty since yesterday, I chose today because it is the second anniversary of the KyoAni arson attack. In addition to viewing the online memorial service this morning, watching Violet Evergarden The Movie (which has the names of the victims in its credit roll) is my way of paying my respects to the victims.

I don't know if I have touched on this, but the dialogue between Gilbert and Violet at the sea (their face-to-face reunion) in the movie actually borrowed from the light novel, when Gilbert and Violet reunited during the train attack. Just like the letters to Ann, which were excerpts from the letters in the light novel.

Also, I previously mentioned that Violet visited Yuris three times because of the pot of yellow flowers (which could have been a drawing mistake; that would mean Violet visited only twice). I think KyoAni fixed the flowers, because I don't think I saw yellow flowers this time. Either that, or I was crying too much to notice. Plus it wasn't a Dolby Cinema, so the colours weren't as vivid, the sound not as clear... yes, once you have watched the Dolby Cinema version of Violet Evergarden The Movie, watching it at a normal theatre just doesn't give the full experience anymore.

This coming Friday, 23 July 2021, Tokyo 2020 will open... meanwhile, I am thinking if I should catch the movie again... 😅 (Update: Got my tickets for Friday...)


Second anniversary of Kyoto Animation arson attack

Today is the second anniversary of the horrible arson attack at Kyoto Animation (KyoAni), which resulted in 36 fatalities and 33 injured.

At 10:34 Japan time, Kyoto Animation streamed a video in memory of the victims.
(Note: This video will only be available on July 18, 2021.)

I have translated the contents below since I don't want to reproduce the video here without permission.
 
<The video will start at 10:34. Please wait a while more.>

Two years have passed since July 18, 2019.
We were preparing to create an opportunity for everyone to gather in memory of those departed, but due to COVID-19, we decided to mourn using this video. We hope you understand.

<Observe a minute of silence>
<End of a minute of silence>

Two years ago on this day, we suddenly lost comrades who made anime together with us.
Nothing can wash away the pain, and we continue to feel the strong presence of our comrades even as time flies by.
In these two years, we never forgot the treasured memories we had with them.

Our condolences.

For this memorial service on the second anniversary, we have received messages from affiliates, family members, and our staff.
Please allow us to introduce some of these messages without the names of their authors.

From an affiliate:
As we go about our work, your names naturally turn up in our conversations. Even today, we continue to create works together with you. We can no longer talk face to face, but we can talk through our hearts. "What should I do in this case?" "How should I deal with this problem?" Such words come naturally.
Even today, your passions continue to live on in us. Your presence and the spirit you left behind. They continue to support us today. Please continue to watch over us. We will do our best to create works that you can be proud of.

From a family member:
The life that I thought would continue forever, your future full of dreams--two years have passed since the day when everything disappeared. There is no day when I did not think about you. Together with your smiling face comes a great sense of loss. Sad, lonely... the tears do not stop no matter how much time has passed. Love only grows.
Since that day, when the rain suddenly stops, when light shines through the cloudy sky, when a refreshing breeze blows by, I always think it is a miracle brought by you. I still want to see you. I just want to see you. I want to see you. I know it is impossible, but I have been hoping since that day.
Thank you everyone for your thoughts.

From a staff (1):
All of you continue to be with us, encouraging and supporting us, even as the seasons change, at every corner, at every moment in life. When I hit a wall, I motivate myself by remembering your determination. On days of clear, blue skies, it warms my heart to know you are happy. Memories of days spent with you flow into my mind when I stand at places shared with you.
Your passions will continue to be delivered to people around the world through your works. No matter how much time goes by, our ties will not change as we weave our thoughts. Please watch over us.

From a staff (2):
Two years have passed since that day. Nothing can soothe the loneliness and sadness. Even today, memories of you, your voices, come to mind. Days spent polishing techniques, days spent working on anime. It is heartbreaking to know that our important comrades, whom we believed would always be around to create the future together and deliver excitement to everyone in the world through our works, are no longer here today.
Let me shout from the heart: I want to see all of you! There is so much more to talk about. We can no longer hear you directly, but your thoughts continue to be with us. Let us continue to create anime together as we converse in our hearts. We will always be comrades.
 
<end of messages>
Our sincere appreciation for your continued support.
We will continue to create works, and deliver excitement and hope for the future to everyone through our works.
We look forward to your support.
 
July 18, 2021, Kyoto Animation
 
<End of video. Thank you.> 

List of 35 victims who appeared in the credit roll of Violet Evergarden the Movie
 
News article:

Update: The actual video from KyoAni was only available on 18 July 2021. The following is a news clip about the memorial service that was held at the site where the studio used to be.



Friday, July 16, 2021

Belle (竜とそばかすの姫)

Update August 1, 2021: Updated this post after watching the movie for the second time.
 
Belle (竜とそばかすの姫, Belle: Ryū to Sobakasu no Hime) was released in Japan's theatres today.

Somehow, I have made it a habit to try and catch a new movie on the day it opens in theatres. This was no exception.

Those who have watched Summer Wars will recognise some of the similarities in how the online world is portrayed. Both movies started with an introduction of the online world. Speech bubbles are used in the same way. There is also the "traditional" whale which is a familiar object in Hosoda Mamoru's films. Also, the director himself said that this movie took reference from Disney's animated film Beauty and the Beast. I must admit, the similarities are there. The beast, the singing beauty, the castle, even a ballroom scene and roses.

It was as if Director Hosoda took Beauty and the Beast and brought it into the 21st century. Instead of magic, we have the Internet. And themes that resonate better with the people of this age. If Beauty and the Beast was a story to help young girls back then accept their fates of being married to unknown older men, then this new film, with its portrayal of the issues faced by teenagers of this age, is here to help the younger generation of our times cope better with their problems. As someone who loves the animated Beauty and the Beast, it is no wonder that I love this fresh new story about Belle and the Beast (aka Dragon, or Ryu 竜).

The movie also touched on real issues. Like the dilemma of social justice warriors. Or the problem of doxing. It didn't delve too deep into them, but just enough to set viewers thinking. The film isn't here to give an opinion. I think it wants us, the viewers, to form our own opinions by raising awareness on these issues. It is not here to tell us what to think, but here to tell us to think.
 
Another theme is that of a hero who sacrifices himself/herself to save someone else. It is not about dying to save someone; it is risking one's life to save someone. Is it worth it to risk a life to save another? What about the loved ones of the hero, who may get left behind because of the hero's decision? At the same time, can we stop our urge to want to help someone else? Knowing that there is no one else who can help but us?

My first impression of the movie is: Hosoda Mamoru has taken a Disney movie and come up with something that is better than any Disney movie so far. The age of the Mouse is over. "Move over; this is the age of Hosoda Mamoru." It is that good, in terms of the writing (screenplay and song lyrics), animation (use of traditional animation and 3D rendering), music (oh, the singing...😍) and the themes. It is Hosoda Mamoru's letter of challenge to the Mouse. "Do better, if you can." The movie is like a musical, and really needs to be enjoyed in a theatre. I really hope they come up with a Dolby Cinema version of the movie. But even if not, I just might go watch it again for the songs.

The main character, Suzu (aka Bell), is voiced by singer Nakamura Kaho. This means that the dialogue and songs flow into each other very naturally, and Nakamura Kaho is a very good singer too. The song sung by Suzu when she first entered the online world of U brought tears to my eyes because of what it touched on and how it was performed. It was as if the movie was written for the voice actor/singer.
 
Now to see if Shinkai Makoto comes up with something better... 😅

Update July 17, 2021: Saw this article today and thought I would share it.

Saturday, July 10, 2021

Improving the Tellsis language translator: Update 1

I mentioned that I want to improve my Tellsis language translator app to include an image picker and OCR. So I spent last night trying to train Tesseract to recognise Tellsis alphabets.

First, why Tesseract? Because I read somewhere that Tesseract can be trained to recognise a new font, so it should be easy for me to train it to recognise Tellsis, or so I thought. The usual method for teaching Tesseract a new font is to fine-tune an existing trained model. For example, starting from the model that has been trained to recognise English alphabets, a set of training data is created using the new font, and training is conducted to fine-tune the existing English model. Using the script here as a hint, this was what I tried to do.

But it didn't work that well. Because the model is already trained to recognise English, and the character for "U" in Tellsis looks like an "O" in English, the model will forever read the Tellsis "U" as an English "O". Not good.

So I needed to train a model from scratch. I used the instructions here to tweak my script to train from scratch. This took some time, but I was able to get satisfactory results... except for some characters which end up being capitalised when they should remain as small letters. This is apparently the hallucination effect, which I think is caused by me using English text to create the training data. The better way is to find text in Tamil script, convert that to unaccented English alphabets, then carry out the substitution cipher to obtain text in Tellsis. The Tellsis text can then be used to create training data, and the resultant model should be able to avoid the hallucination effect.
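The substitution-cipher step itself is simple to sketch in Python. The mapping below is a placeholder (a reversed alphabet), not the actual Tellsis cipher table:

```python
# Illustrative sketch of the substitution-cipher step: map each unaccented
# letter to its Tellsis counterpart. The mapping here is a stand-in
# (reversed alphabet), not the real Tellsis cipher.
CIPHER = {p: c for p, c in zip("abcdefghijklmnopqrstuvwxyz",
                               "zyxwvutsrqponmlkjihgfedcba")}

def to_telsis(text):
    """Apply the substitution cipher, leaving non-letters untouched."""
    return "".join(CIPHER.get(ch, ch) for ch in text.lower())

print(to_telsis("hello world"))  # → "svool dliow" with this placeholder mapping
```

Running the same function over a whole file of transliterated Tamil text would produce the Tellsis corpus needed for the training data.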

Problem is, I don't know how to convert Tamil to unaccented English alphabets.

One way is to use the existing Tellsis language translator and adapt it to read an entire file (English text). Then translate that file into Tellsis, save it, and use that new file to generate training data. This sounds like more work again... and I slept in the wee hours last night trying to figure out how to train Tesseract, so I am a bit sleepy now...

Anyway, as for selecting an image or camera capture, image_picker does not work on Flutter Desktop, so I will need to use the universal_platform package to identify the platform, and run image_picker on Android and file_selector on Linux and Windows. Another challenge for another day...

By the way, I don't own any Apple products, so I cannot develop for iOS or MacOS. The only time I have used an Apple product was many years ago in school when we found an old Apple IIe in a corner of the computer club room and plugged it in to find that it could actually still boot up.

Back to the topic of Tesseract. I am trying to train a model from scratch using two fonts. This is the script. (I am still playing around with the script, so some parameters may change over time.)

# Remove the previously generated training data
rm -rf train/*

# Generate training data
MAX_PAGES=100
NUM_ITERATIONS=10000
cd src/training
./tesstrain.sh --fonts_dir ~/github/tesseract/fonts --fontlist \
  "Automemoryfont" \
  "TellsisTyped" \
 --lang eng --linedata_only --langdata_dir ~/github/tesseract/langdata_lstm --tessdata_dir ~/github/tesseract/tessdata \
 --maxpages $MAX_PAGES \
 --output_dir ~/github/tesseract/train
cd ../..

# Train the model from scratch
rm -rf output/*
OMP_THREAD_LIMIT=8 ./lstmtraining --debug_interval -1 \
  --traineddata ~/github/tesseract/train/eng/eng.traineddata \
  --net_spec '[1,36,0,1 Ct3,3,16 Mp3,3 Lfys48 Lfx96 Lrx96 Lfx256 O1c111]' \
  --model_output ~/github/tesseract/output/telsis --learning_rate 20e-4 \
  --train_listfile ~/github/tesseract/train/eng.training_files.txt \
  --eval_listfile ~/github/tesseract/train/eng.training_files.txt \
  --max_iterations $NUM_ITERATIONS
#  --max_iterations $NUM_ITERATIONS &>~/github/tesseract/output/basetrain_typed.log

# Combine the checkpoints and create the final model
./lstmtraining --stop_training \
  --continue_from ~/github/tesseract/output/telsis_checkpoint \
  --traineddata ~/github/tesseract/train/eng/eng.traineddata \
  --model_output ~/github/tesseract/output/telsis.traineddata

cp ~/github/tesseract/output/telsis.traineddata ~/github/tesseract/tessdata/telsis_typed.traineddata


Update July 26, 2021: I have worked on the app to include this trained model, so right now, the app (v0.1.4_alpha) has OCR capabilities. But the trained model is based on the font I made, so it has very poor performance on actual text. If anyone wants to work on training data for the model using text found in the anime, please feel free to use the trained model here to improve it. As the tesseract_ocr package only works on mobile devices, this feature has only been tested on Android (I don't have an Apple product).

Friday, July 09, 2021

Planned improvement to Tellsis language translator: image picker + OCR

Just a quick post on how I intend to improve my Tellsis language translator.

Currently, the app is able to translate to and from Tellsis based on text entered by the user. Text in Tellsis (either the source or the target) is displayed in the Tellsis font. This is great when translating to Tellsis, since you can see the resulting Tellsis characters right away. The reverse is tedious, though, since the user needs to decode the Tellsis characters into English alphabets before it can be entered as the source text.

So the next step is to use optical character recognition (OCR) to automatically extract the Tellsis characters from an image and output that in English alphabets.

For OCR, I intend to use the tesseract_ocr package. It will be necessary to create a trained model that can "read" Tellsis characters, and I think the scripts in this repo (tesseract-training) should be able to help.

As for selecting an image, the image should either come from the gallery or camera, and the image_picker package is just the right tool for this job. I even found a tutorial/example article here on how to use image_picker.

The actual changes will be to add a "Select image" button to the app's main screen. When an image has been selected (either from the gallery, or taken with the camera), the image will be passed to the OCR routine to extract the Tellsis characters and output them as English alphabets. This input will also be displayed in the main screen where the Tellsis alphabets usually get displayed. The user can then press "Translate" to translate to the target language.

Now to find time and motivation to actually work on this improvement... 😅

Update July 26, 2021: Image picker added in v0.1.4_alpha, which can be found here.

Thursday, July 08, 2021

The impact of autonomous driving

A lot of research is being done on autonomous driving, and the day will soon come when we humans can just tell our car the destination and it will bring us there. With the development of vehicle control technology, computer vision, map following, and inter-vehicle communication, getting a car to actually drive automatically to a destination is already possible. The real challenge is how to do so safely in a crowded environment when there are other vehicles and road users.

This decision-making aspect of driving is the difficult part. People have brought up the ethical issues, which will also become legal issues. The classic question has been: should the car swerve and kill pedestrians to save its passenger from a collision (and possible death) or should it collide to avoid killing pedestrians (and possibly killing its passenger instead)? Who to save, who to kill? This ethical aspect of decision-making will need to be part of the autonomous driving system, and I frankly don't think anyone is ever going to have a good answer for this. At the end of the day, whatever the final decision may be, the next question is: who is responsible for damage/injury/death caused by an autonomous vehicle? Is it the passenger, the owner, or the manufacturer?

Obviously, manufacturers are not going to want to take such responsibility. They make millions of cars. If they are held liable for every accident, it won't take long to bankrupt them. Yet at the same time, not having some form of liability on the manufacturers may result in manufacturers being more willing to take development and production risks in churning out products that may fail to meet ethical standards. Why spend billions on developing a proper decision-making engine if you do not need to take responsibility for the poor decisions made by that engine? But if we do hold manufacturers responsible, then it deters manufacturers from working on autonomous driving in the first place, since it is the driver who takes responsibility in a car driven by a human.

Should the owner then be held responsible? You might say that it is the owner who chose to buy that car, so the owner will need to be responsible for that decision when the car kills or injures someone. But as cars become more advanced, I don't think owners will be able to fully understand how these autonomous cars make decisions, and it will be unrealistic to hold car owners responsible for the car they choose to buy.

Okay, then how about the passenger? The current method of development has a "driver" who is ready to respond in an emergency, taking over from the car's decision-making engine. In the future, we may expect passengers to fulfill that role--to take over in an emergency. There are two problems, though: reaction time and skill. Will the passenger be able to react in time to take over from the autonomous driving system? This requires the passenger to be fully focused on the road and be capable of driving, which defeats the purpose of having autonomous cars. And even if the passenger is fully focused on the road, can we expect him or her to have the skills necessary to handle a car in an emergency? After all, driving is a skill, and the passenger is likely to be suffering from a severe lack of practice if cars drive themselves 99.99% of the time. It may be even more dangerous for an out-of-practice driver to be trying to handle the car in an emergency.

Responsibility must lie somewhere, or else we will end up with substandard cars that endanger people's lives and substandard passengers who cannot handle cars in an emergency. And even when we have reached an ethical/legal conclusion, we need to remember the longer-term impact on society. When autonomous cars have entrenched themselves in our lives, we humans will no longer have the skill to drive. It is that simple. We will be relying on machines to get us around. While autonomous driving opens up possibilities for people who may not otherwise be able to drive (like the elderly or disabled) and lets them move around, reliance on autonomous driving means that human mobility in the future, for everyone (young and old, disabled or not), will be limited by what machines can do.

So yes, autonomous driving sounds great in the short term. But the long-term impact is that human movement will be limited by what machines can provide. If your car refuses to drive off the paved road to follow a forest trail, you either walk or give up on going down that trail. Are we ready for such a future?

Thoughts on freelance work in Japan

This post came about because of a recent tweet by animator Mushiyo, which triggered reactions from anime fans about the working conditions at MAPPA and in Japan's anime industry in general.


MAPPA has since put out an official statement rebutting these claims and hinting at legal action against animators who spread them.

As a freelance translator in Japan, I can fully understand what the freelance animators here are going through. The freelance industry here in Japan is essentially a loophole for companies to hire workers at low salaries without having to pay other benefits such as insurance and pension. Such benefits can amount to up to a third of the actual salary paid to regular employees, depending on age. Freelancers do not receive such benefits, yet they are still legally required to pay for insurance and contribute toward pension, which means that up to a third of the meager sum being earned is further taken away from them. Freelancers are also not paid bonuses; even if they do get bonuses, it is a small token sum compared to what regular employees receive.

This becomes quite unfair when you think about how a company's business is supported by the work of freelancers. I mean, an anime studio can't produce anime without animators. A translation agency can't produce translations without translators. The core workers that support the operation of a company are actually the ones being paid the least.

Instead, freelancers compete with each other for work, and run the risk of being sidelined by a company forever if they do not accept work at unreasonable prices. It is because of this "internal competition" that companies are able to get away with exploiting freelancers. The availability of part-time freelancers (who have a full-time job doing something else and take on freelance work as additional income) does not help, because these part-time freelancers are usually able to accept work at lower prices because they already have a stable income. But the overall effect they have is to drive down prices in the industry as a whole, jeopardising the livelihoods of those who work full time.

The unfair power balance between companies and freelancers is a factor that always looms over our heads. Freelancers are forced to sign binding agreements with companies; such agreements can be used by company lawyers to hang an axe over our heads at all times. Every freelancer runs the risk of being sued into bankruptcy for any mistake. And as mentioned, other than this legal risk, there is always the risk of being sidelined. Companies have a pool of freelancers, and if we refuse to take on work (for whatever reason, be it price or because we are sick), they may stop approaching us for work because they can always find someone else. What this means is that freelancers have to work, whatever the price, whatever their health may be.

Someone once told me: quality, time, and price, choose two, because it is not possible to have all three. Here, freelancers are expected to deliver good work in a short time at low prices. If you don't deliver on time, it is a breach of contract and you may be sued. If you don't deliver quality, it is a breach of contract and you may be sued. If you don't accept low prices, you don't eat.

It is no wonder that animators are suffering. It is no wonder that we have seen recent series run into trouble with schedules and quality. If companies exploit the people who actually bring in their profits, the long-term impact is a stagnation of the industry as a whole, as freelancers burn out and leave the industry only to be replaced by newbies, who go through the same process again and again.

But this is not an issue caused by companies alone. It is a much deeper problem arising from the entire industry structure, where companies are expected to deliver quality work in a short time at low cost. The entire Japanese economy runs on such expectations, which is why we keep seeing systemic problems in many different industries. We have the fudging of seatbelt and fuel efficiency data in the automobile industry. There are construction companies that take shortcuts in sourcing construction materials. There are television producers who stage interviews.

At the root is simply the fact that Japan is no longer the bubble economy it used to be. Society's expectations are stuck at a time when people had the money to pay for good quality delivered quickly. People continue to expect the same quality in the same short time; the problem is, they no longer have that kind of money. Companies are being driven to cut costs to survive, which ends up creating a vicious cycle of being unable to create true value and thus bring in real profits. The end result is that while corporate Japan tries to keep up appearances, the freelance industry bears the burden of such efforts.

So will the recent spotlight shone on the anime industry help change the situation? I am skeptical. The government here has been talking about giving better treatment to freelancers, temporary workers, part-timers, etc. but there has not been any concrete progress because of the deeper problems with the economy. It will take someone with exceptional political conviction and power to be able to pull off such an extensive reform of Japan's economy and society.

Until then, I will just keep working, day or night, healthy or sick. Because the alternative is bleak.

Friday, July 02, 2021

Kyoto Animation to hold memorial service on July 18, 2021

Like last year, Kyoto Animation will be holding a memorial service from 10:30 to 10:40 for those who passed away in the fire of July 18, 2019.

Official press release from Kyoto Animation
 
As usual, the public is being asked to refrain from visiting the site of the fire on the anniversary itself as well as the days before and after it.
 
The memorial service will be broadcast live on KyoAni's YouTube channel, and the video will also be available for the rest of the day.

Let's observe the request from KyoAni and honour the deceased together online.

Update July 17, 2021: Kyodo News has an aerial shot of what the site of the fire looks like now.