Ignorance
This morning I was taking a run around the block of our parents’ home. I’m a terrible runner; I just do it for my health and to de-stress. It also gives me an opportunity to get outside.
The weather was cloudy and the air cool and humid. I was on my casual run with my wireless earbuds in, listening to Daughtry songs from the 2000s, when I noticed a little something.
They’re earthworms! The picture below might look fairly uneventful, but there are gobs of earthworms crossing from one side of the grass to the other.
Actually, a lot of these earthworms were dark and flat; I think they were dead. Cause of death unknown. Maybe the concrete is just treacherous and impossible for these poor earthworms to cross, or maybe runners and bikers squished them while doing their own thing, completely disengaged from their surroundings, just as I was at first.
Noticing these earthworms made me much more careful not to squish them. I thought these were earthworms too, but they just turned out to be sticks.
The picture below is also not an earthworm, but something that fell from a tree. There’s also a roly-poly passing by, if you look carefully.
That got me reflecting on my studies, particularly in machine learning. If it was difficult for me to notice these precious earthworms, just imagine how difficult it would be for a convolutional neural net or a transformer to notice them.
The answer: difficult. Practically impossible [1]. Almost certainly these earthworms would be treated as “statistical noise”. Why? Because that’s how neural nets are trained: if a geometric regularity is not relevant to the objective function, the network ignores it. In fact, ignoring these irrelevances is “optimal”.
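You can see this in miniature with a toy model. The sketch below is a hedged illustration, not anything from a real vision system: a tiny logistic regression trained by gradient descent on synthetic data where the label depends only on feature 0, while feature 1 (our “earthworm”) is plainly present in the input but irrelevant to the loss. The optimizer drives its weight toward zero, i.e., it learns to not see it.

```python
import numpy as np

# Toy illustration (assumed setup, not a real vision model):
# feature 0 determines the label; feature 1 is visible but irrelevant.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(2)
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / n        # gradient step on the log-loss

# The weight on the irrelevant feature stays near zero: minimizing the
# objective means ignoring anything the objective does not reward.
print(w)
```

Nothing forces the model to discard feature 1; the loss simply never gives it a reason to keep it. That is the footnoted point below: what a trained system notices is entirely downstream of what its objective was built to care about.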
How different is this from humanity’s long history of imperialism, from Genghis Khan and the Roman Empire to the Dutch, the British, the United States, and China? Not very different, insofar as what they all have in common is the relentless pursuit of singular goals, ruthlessly optimizing and expanding almost without regard for the consequences.
Today we live in a new age of digital imperialism dominated by companies such as Facebook, Google, Microsoft, Alibaba, and Tencent. These transnational corporations steal the details of our most intimate personal lives and use these neural net algorithms to assert domination and control over us. “Users” of digital “platforms” essentially become digital slaves, trading their private data, short-term lusts, and conveniences for the hijacking of their minds, to the detriment of their long-term health and fulfillment.
And yet the irony is that these companies, the CEOs and boards of directors running the digital world, think they are doing us a favor. They are making the world a better place with new technologies, ignoring all the consequences of their unfettered use. We even have a euphemism for that in economics: “externalities”.
Adam Smith was uncannily shrewd in making the following observation: “All for ourselves, and nothing for other people, seems, in every age of the world, to have been the vile maxim of the masters of mankind.”
We should all pay more attention to Adam Smith. His observation is more relevant now than ever.
[1] Technical note: Obviously this argument rests on the premise that the objective function was not trained to look for earthworms specifically. Real-life computer vision systems currently suffer from this kind of ignorance: they miss whatever they don’t look for. As long as the current mindset in the machine learning community persists, this will always be the case, and for every problem there will be a band-aid. But then there will always be another problem. It is really a sinking ship with a helpless captain. You should have built a better ship.