I'm having feelings for technology, and it may be helping me have more empathy for humans.
I got a digital food scale for Christmas. Black. Sleek. Zwilling. I've wanted one ever since I started meticulously tracking my food intake each day.
After using the scale for a few days, I noticed a feeling stirring in me—a feeling of compassion.
You see, my intelligent little scale has an auto-shutoff feature: it turns itself off after 3 minutes of inactivity. Despite this feature, I feel evil at the thought of it sitting there, switched on, doing nothing for 3 whole minutes. Why should it sit there that long? In case I beckon it again to do my metrical bidding? Does it not also deserve rest? Why should I strain its wee little battery for no apparent reason by leaving it on when it's just as easy to turn it off?
Due to these questions and my feeling of sympathy, I haven't once left it on.
Technology and compassion
The confluence of compassion and technology is not new. People have been calling for more compassionate technology for a while now, hoping it can "restore, humanize and strengthen our relationships with one another as well as ourselves" (Day et al., 2021).
But that's not exactly what I'm talking about here. What I felt was compassion toward technology—a far less familiar concept.
While it's typical to feel sympathy or concern for another person's suffering and misfortune, it's not typical to feel it for inanimate objects like technology products. For instance: when a friend mentions they dropped their phone and it shattered into a million pieces, you don't think about how the phone suffered as it disintegrated. Instead, you understand your friend's plight because you've likely shattered something meaningful of your own before.
And yet, in my scenario, there was no other person to sympathize with—only the technology itself. And so I did. I think.
Skepticism
Perhaps what I immediately thought was compassion was not compassion at all. The human mind is complex. There are a myriad of reasons that could explain why I feel the way I do.
Was it respect?
Maybe this feeling was actually respect for the people who put effort and care into designing and building the scale. Perhaps I see their output as something to be sympathetic towards because human energy was expended to create it. When I show no respect for the product, I'm showing disrespect for its creators—and that doesn't sit well with me.
Was it sentimentality?
Another plausible explanation is one tinged with tenderness. Maybe I feel an increased sense of duty to take care of the scale because my partner purchased it for me. Extra fondness for a gift from a loved one would easily explain why I want to care for the scale so well.
Was it selfishness?
Digging deeper, perhaps I'm simply selfish. I know that leaving the scale on for 3 minutes before it turns itself off will drain the battery quicker than turning it off manually, so maybe I switch it off as a means of preserving its battery. This would have the effects of: saving money by needing to charge it less often, saving effort by needing to plug it in less, and lessening cognitive stress and anxiety by not having to wonder whether it will have enough charge when I need it.
A more compassionate future
I may have explained away my immediate feelings of compassion for the piece of technology itself, but does that mean there's no room for compassion towards tech products?
Although a digital scale is undeniably inanimate, what happens when humanoids equipped with advanced artificial intelligence (AI) start appearing as workers behind desks, or even as cleaners and caretakers in our homes? Will it really be so odd to have compassion for their well-being? Will people be so quick to treat humanoid robots as inanimate, disposable pieces of technology when they observe them caring for their sickly loved ones or helping their children with homework?
I think not.
It'll be especially difficult to see AI in a physically human form tenderly caring for a loved one and not feel a fondness and respect for it. But then again, does respect or fondness for the helpful humanoid equate to a sense of sympathy for its struggles and sufferings? Besides, what struggles, if any, would a humanoid have? Wouldn't it ostensibly have been built with no concept of suffering or struggle at all?
I suppose that's not really important here. We don't question whether someone is or isn't suffering when we extend sympathy to them. It seems to me to be more a matter of the compassion arising from within ourselves. We know how it must feel to be in their position—whatever it may be—and so we respect them for enduring it and extend our sympathy automatically. So, with that in mind, will it really matter whether we recognize a humanoid as capable of feeling certain emotions before we pity it? It would seem all that matters is that we're thinking about how we would feel in its situation. The feeling of compassion will follow shortly thereafter.
Reflection
Building on this experience, maybe the practice of putting oneself in the position of even inanimate objects and imagining how they might feel is a neat exercise for building empathy for our fellow humans.
Check it out:
- How tiring must it be for chairs to simply support our weight day in and day out?
- What kind of a demanding life must it be for pixels in a TV or computer screen to be on for such an incredibly long time without a break?
- How draining must it be for the browser rendering this web page to also be constantly doing thousands of other tasks at the same time?