Pro or Anti Human?
If you haven’t been paying attention to the noises Tristan Harris & The Center for Humane Technology have been making lately, it’s worth perking up your ears…or eyes in this case. Rather than being overly cynical and doomsaying in their analysis of AI, they have a realistic and direct message about its benefits and dangers. Too often online, we see the extreme poles: AI enthusiasts blindly following whatever drivel tech companies shovel, and AI pessimists dismissing any reasonable use of the tech out of hand. What Harris and the CFHT are doing is communicating directly to the majority of us who fall somewhere in the middle: the people who are excited about the potential benefits of this tech, while also not wanting our world to descend into a cyberpunk dystopia where a handful of people control the vast majority of the wealth…y’know, even more so than now.
There is a lot to unpack in their message, but one of the main takeaways is that AI is a technology unlike any that has preceded it, and with that comes a great deal of risk. However, the majority of people have just not gotten the message. Humans aren’t hardwired to easily mobilize en masse to resist rapid change. It feels like our individual actions are not going to be effective, and so we become resigned to whatever outcome this fast-moving steamroller is gonna leave us with.
This is a cognitive weakness that the rich and powerful exploit every second of every day to their own ends. This is nothing new. For the past fifteen years or more, the social media experiment has led to numerous breakdowns in society. Increases in anxiety, depression, isolation, and the sexualization of children have all been credibly attributed to social media use. And we don’t really need the data here do we? We’ve all been part of the social media experiment and have felt the results ourselves.
I’ve harped on the evils of social media addiction in other articles, but what I want to emphasize here is the speed of the results: We find ourselves today in a society more divided and depressed than ever. And that only took about 15 years!
In the past three years AI has gone from being barely better than a search engine, to truly powerful tech that has the capability of replacing 11.7% of the workforce in the USA alone, representing a potential $1.2 trillion in lost wages. It is estimated that within the next four years, AI will be able to replace 9.1% of the global workforce. If you thought the rate of societal change was too fast between 2008 and 2023, then I’ve got some bad news for you.
I don’t think the speed of change itself is the problem. Rather, it’s the recklessness of the implementation. In the race to be first to market with any new feature, things are not safety tested. Once again, we are prioritizing profits over human well-being. In recent years there has been a large movement to disconnect from social media, but we have been slow to respond, and much of the damage has already been done. Let’s not make the same mistakes with AI.
The cost of a data-driven world
Listening to Harris’ message has got me thinking a lot about the concept of “value.” Initially, I found myself reflecting on the value we place on data and tangible results. Going all the way back to the 80s and 90s, we saw a rise in the hunger for quantifiable data in every sector of our lives. If you worked in education, business, law, government, insurance, sales, or virtually any other area in this period, you certainly witnessed the supremacy of data-driven initiatives.
On the face of it, this might seem like a great thing. Better data ought to equal better decisions. However, we’ve all seen the reality. Too often the numbers haven’t been taken from suitable sample sizes, the analyses have been tailored to fit preconceived notions, and correlations are treated with a gravitas that should be reserved for causations. Truthfully, who amongst us actually finds benefit from slideshow meetings filled with highly suspect charts and numbers?
It’s not that data itself is bad, but it often misses the human element of a situation, boiling our shared reality down to quantifiable points. In many cases we will actually change what we do in an organization so as to make it easier to gather data on the outcomes. That’s completely backwards! Rather than choosing a path most likely to generate positive results for people, we choose the path most likely to generate quantifiable data.
This, I believe, is one of the main reasons why our education system is failing so hard year after year. We have reduced the “human element” that teachers bring to their lessons, and replaced it with standardized tests & shared lessons where everyone teaches the exact same thing so it can all be measured and compared. This results in over-testing easily digestible information at the expense of taking time for deep-dives into interesting topics, and the fostering of individual creativity. I know not all of us are in education, but even if you don’t have a horse in that race, I bet most of you have similar gripes with your own workplaces.
In short, we have become a society where we value quantifiable data more than qualitative human experience. With this hyperfocus on tangible results, we have set the stage for an anti-human future, because AI is going to keep getting better at achieving quantifiable outcomes faster and more easily than any person can. If we don’t shift our societal values to be more pro-human, we will see a future where AI is valued more than people.
If you doubt that conclusion, just wait until the section after this next one.
Instrumental vs. Intrinsic Value
We’ve all seen examples of this:
Why study fine arts? What kind of job will that give you? What can you do with that?
Real winners grind. Don’t be wasting your time playing games, watching TV, etc. Get out there and do something worthwhile!
Why are you writing a think-piece on value when no one is gonna read it? (Hey now, let’s not make this personal)
The underlying assumption driving the above perspectives is that something is only valuable if it gets you something else. The assumption is that the “value” of such things, like a fine arts education, is only instrumental.
Instrumental value means that something is only valued if it is useful to accomplish something else. A common example of something with instrumental value is money. A $100 bill does not have value by itself. Its only value is its potential to be exchanged for something else. The same is true for a hammer, or a movie ticket, or even a stop sign. These things are instruments which we can use to obtain something we value.
But don’t some things have intrinsic value? Aren’t some things just good? Full stop. No further reason needed. What about knowledge? Education? Love? Happiness? Are these things valuable? Are they valuable only because they can get us something, or are they good in and of themselves?
The logic of instrumental value necessitates the existence of things with intrinsic value. If you keep scrutinizing the rationale, eventually you’ll dig down and hit the bedrock of intrinsic value:
Why is money valuable? Because I want that new shirt.
Why do you value the new shirt? Because it’s fashionable and I look nice in it.
Why do you value looking nice? Because I feel good when I look nice.
Why do you value feeling good? Because…feeling good is good.
The problem with seeking only instrumental value is that it leads to an infinite regress if you only ever justify one instrumental value with another, leaving a hole you can never fill and a drive to keep consuming. (Why yes, the hyperfocus on instrumental value is a tool of capitalism!)
The intrinsic value of people
The problem with trying to explain why something might have intrinsic value is that it defies argumentation. When we want to persuade others of our point of view, we need to provide reasons. So, if you want to argue that something is valuable, it is near impossible to find reasons without appealing to instrumental value. Why is it good to feel fulfilled? Um…because it is. If someone doesn’t share your instinct on this, then it is literally impossible to argue them into the position with reason.
And so, people continue to be unconvinced when I say something like, philosophy is intrinsically valuable. If someone is convinced that value only comes with utility, and they don’t see a use for philosophy, they’ll conclude it’s all a bunch of wank.
But what about people? What about the well-being of conscious creatures? Do we really need an argument for their value? Do people need to have a “use” in order to have value?
If you find yourself answering “yes” then I fear you may be too far gone. But for the rest of us, we need to take action and take it now before it is too late. Don’t believe me? Have you seen how the leaders of the AI revolution are currently talking about people?
When pressed about the vast natural resources AI requires, Sam Altman recently responded by stating “…it also takes a lot of energy to train a human…It takes about 20 years of life – and all the food you consume during that time – before you become smart.”
There are plenty of examples already to illustrate how these AI tech leaders see us, and it should worry us all very much. The leaders of this AI revolution who are charging forward, putting us all at risk, do not see the intrinsic value of humans.
And they are slowly trying to make you feel the same way. One insidious side effect of the extreme polarization and tribalism fostered by social media platforms is the devaluing of other human beings. It has become very common to refer to others as NPCs (non-player characters), a term taken from video games, where you, the player/main character, interact with programmed characters in the make-believe world. You know they aren’t real people, but having them around makes the game fun for you. (I’m ride or die for Tali and Garrus!) Taking this concept into our real-world interactions is a dangerous signal, especially when it is used to dehumanize people of different political alignments.
What can we do?
We need to insist on legislation that protects our data and compensates us when it is used. We need to elect representatives who prioritize initiatives to protect people and ensure a decent quality of life in the face of job loss. We failed to accomplish this with social media. Companies like Meta made billions off our data without any compensation. We can’t make the same mistake this time. These AI models are trained on our hard work and they are going to take our jobs. That’s fine! I’d love more free time! But if we’re gonna enjoy that future, we all need to be compensated for our contributions to the system.
But they don’t value us. They’re not going to give us anything willingly.
We have been slowly coaxed over the past few decades to believe that intrinsic values are flights of fancy, and that all that matters are tangible results. AI presents us with the real possibility of taking this point of view to its logical extremes. Don’t let them do it.
Resist art created by generative AI. Prioritize supporting people who create
Join organizations like the Center for Humane Technology
Resist the urge to dehumanize people online and in your everyday life
Follow, share, and discuss articles on AI, especially like the ones linked above that highlight the misanthropy of AI leaders
Don’t let yourself get fatigued! There are going to be a lot of people pooh-poohing you, telling you that it’s “not a big deal,” or asking “are you still talking about that?” We are not hardwired to sustain attention on broad social movements, so we have to actively resist the urge to disengage.
It’s not too late. It’s not hopeless. But it is an uphill battle. Are you prepared?
In good faith, and goodwill,
S.D. Miller
If you would like to support my work, you can always buy me a coffee using the link below!

