Emily Oster opened her podcast “Raising Parents” by explaining that she is “unapologetically data-driven.” This struck me as odd: who is asking Oster to apologize?
After all, she’s become arguably the most popular expert on pregnancy and parenting precisely by using her credentials as an Ivy League economist. If anyone knows how data is now more prized than tradition or anecdotal experience, it’s her.
It reminds me of when my college professor would apologize before swearing. The taboo around swearing at a state school was nonexistent, so the professor actually had to reintroduce the taboo by prefacing her cursing with an apology in order for it to have the desired effect.
Likewise, the data-driven lobby is so ascendant that I suspect Oster’s “apology” is meant more to highlight how awesome it is to be data-driven than to genuinely preempt complaints from the pro-tradition, pro-anecdotal-experience crowd. While Oster’s goals of reducing SIDS or fetal alcohol syndrome are uncontroversial, “data-driven” as a buzzword is usually deployed without qualification or elaboration. The idea is that faster, more efficient, and more productive is inherently better.

When I recently spoke with Porcu, he argued that this is a characteristically Western approach. From the Reformer Popes, through the Protestant Reformation, the Enlightenment, and up to the present, he sees a utopian urge in the West that is restlessly searching for the perfect system. If you believe in the Myth of Progress — the idea that we’ve come out of the Dark Ages through applied science and reason — then it follows that solving the human condition is chiefly a problem of systems. In that sense, Porcu sees Pope Leo IX and Karl Marx as engaged in quintessentially Western projects, although they differ in their prescriptions.
There are two problems with this approach. One is that not everything in life can be resolved by systems. As Porcu says, you may just get martyred, and that cannot be fixed with a system. In fact, systems weren’t the solution historically either. As he put it, “We didn't establish Christendom by passing the right combination of laws. We established Christendom by dying in droves for three centuries.”
The other issue is that valorizing systems assumes a neutrality that doesn’t hold up under scrutiny. Porcu criticizes secular nation-states for claiming a government and society does not need to be explicitly ordered toward a particular Good. That premise is breaking down under the weight of an increasingly pluralistic society. Likewise, we should always ask what a system is optimizing for and what costs it will bring rather than assuming a pure, neutral, “data-driven” process.
As a hypothetical, he said to imagine that Silicon Valley were run by monks: their advanced technologies would probably not have included infinite scrolls and dopamine feedback loops specifically designed to engender addiction. TikTok is frictionless, efficient, and a highly sophisticated product, but it is optimizing for engagement and is thus destructive in all sorts of ways. That’s a data-driven product, too, but it lacks a healthy telos.
Or consider this viral video that presents “the morning routine that saved my life.”

As has been pointed out, this life is missing one thing: other people. And it seems like they’d only get in the way of his “Waking up without an alarm, bible study, journaling 10 things I’m grateful for, a love letter to God, my daily power list, wim hof breathwork on my PEMF mat, meditation for 10 minutes w/ red light and prayer. Then I take my supplements, hit the gym, cold plunge, swim, and sauna before making my morning coffee and getting to work.” These “secular monks” are highly disciplined and data-driven, but their optimization feels cold and dehumanizing when fully realized. He claims this routine “saved” his life, but what kind of a life remains?
This question of optimization is on my mind after OpenAI recently acquired Jony Ive’s io for $6.5 billion. The idea is to build an AI assistant that will be aware of our surroundings and go everywhere with us. The business proposition is pretty clear: a secretary, research assistant, and friend at our beck and call. What once was reserved for CEOs and tenured professors is now offered to all. Sam Altman estimates this acquisition could add $1 trillion in value to OpenAI. It could theoretically automate most of our lives.
The question is, what kind of a life would remain?
As Nicholas Carr recently wrote, when a person automates a task, one of three things always happens.
Their skill in the activity grows.
Their skill in the activity atrophies.
Their skill in the activity never develops.
Which scenario plays out hinges on the level of mastery a person brings to the job. If a worker has already mastered the activity being automated, the machine can become an aid to further skill development. It takes over a routine but time-consuming task, allowing the person to tackle and master harder challenges. In the hands of an experienced mathematician, for instance, a slide rule or a calculator becomes an intelligence amplifier.
If, however, the maintenance of the skill in question requires frequent practice — as is the case with most manual skills and many skills requiring a combination of manual and mental dexterity — then automation can threaten the talent of even a master practitioner. We see this in aviation. When skilled pilots become so dependent on autopilot systems that they rarely practice manual flying, they suffer what researchers term “skill fade.” They lose situational awareness, and their reactions slow. They get rusty.
Automation is most pernicious in the third scenario: when a machine takes command of a job before the person using the machine has gained any direct experience doing the work. Without experience, without practice, talent is stillborn. That was the story of the “deskilling” phenomenon of the early Industrial Revolution. Skilled craftsmen were replaced by unskilled machine operators. The work sped up, but the only skill the machine operators developed was the skill of operating the machine, which in most cases was hardly any skill at all. Take away the machine, and the work stops.
We already have seen how other efficiencies can make us mentally flabby. We tend to conflate what we know with what we can look up and trade the quality of experiences for a higher quantity of them. It's highly efficient to quickly look up that director’s name, the directions, or to send a text message — but we might diminish our own memory, navigational skills, or sense of connection in the process. That might be an acceptable trade-off at times, but with AI, the danger would be outsourcing not just memory or communication but cognition entirely. The deskilling in this case wouldn’t be in blacksmithing or textiles, it’d be in the basic operation of your own life — what you like, what you think, what you love.
Marshall McLuhan said that every technology augments and amputates certain capacities. If the augmentation in the case of an AI assistant is 24/7 access to a butler/tutor/personal assistant, the amputation could border on a lobotomy. There’s a great vignette in Dave Eggers’ satire The Every where a character has become so dependent on her virtual personal assistant (called an OwnSelf) that she’s unable to figure out how to go up one floor. It’s described in a medical study on long-term consequences of heavy use of OwnSelf:
Subject 277 was found today at the bottom of the stairwell, unable to discern how to get to the second floor. Her OwnSelf had not been updated. Subject was conscious of the humor in the situation, but was still unable to conjure a way to get to the second floor without OwnSelf guidance. She laughed about her failure and was quite apologetic. When offered the chance to cease the OwnSelf experiment, she could not make that decision, either.
Yet, people still onboard these technologies unthinkingly, in part because the Western desire for constant optimization and improvement is so deep-rooted. We are primed to believe the story of AI boosters: unimaginable efficiencies will finally bring the long-awaited End of History. It’s just an issue of getting enough energy and compute. Just a systems issue. Of course, taken to an extreme, that thinking can justify any cost. AI refusing to get shut down, blackmailing engineers, or lying to testers are trivial concerns when utopia is within our grasp.
By contrast, Dr. Porcu advised:
[People are] looking for the system that fixes the problem of life, but there is no system that fixes the problem of life. There's only Life Himself and becoming Him, becoming part of Him. That's how you fix the problem of life: through holiness.
He went on to explain how the systems of Eastern Orthodoxy are undoubtedly helpful — Godparents, Forgiveness Vespers, regular fasts, etc. — but just getting dunked doesn’t solve your problems. Bishops will still be defrocked, Patriarchates may excommunicate one another, and, in the words of St. Anthony, we should expect temptation until our last breath. The pursuit of sainthood cannot be automated, delegated, or systematized: “We're fundamentally a human person-centric church. You have to do it. You have to get up, and you have to not sin.”
Part of that is because holiness cannot be understood or acquired simply through data analysis. Porcu described how there are participatory truths that can only be known through firsthand experience. He likened it to joining a family. You can observe all you like, but once you marry in, you begin to understand the customs from within. Eventually, you sense whether things are characteristic of the family by intuition rather than via intellection. He explained how the Church Fathers taught that prayer was the highest capacity of the mind, not mere computation. AI may well ratchet up efficiency and intelligence, but that isn’t an end in itself. As Porcu quipped, there’s a reason we have the archetype of the “evil genius”:
The intellect is not good in an unqualified sense because obviously you can do evil with it…More IQ makes you better? Not if you're Dr. Octopus. The rightly directed intellect toward The Good™. That's what intellect is for.
Thus, the common line offered to Orthodox inquirers is “come and see.” Don’t worry about reading books or watching videos: just come to the Divine Liturgy for yourself. You might say this is an attempt to appeal to or develop the mental faculty of prayer. And many find that after one visit, they don’t want to leave (though they might not have a perfectly logical explanation for why that is).
If the Orthodox answer to the problem of life is holiness and union with Christ, there is a growing lobby calling for union with AI. Both promise deliverance from the uncertainty and pain of life. Both will remake us in their image. Christ offers to make us by grace what He is by nature. AI offers to make us by conditioning what it is by computing. One is trusting a Person. The other is trusting a system. McLuhan pointed out how Psalm 115 could be applied to our technological society:
Why should the Gentiles say,
“So where is their God?”
But our God is in heaven;
He does whatever He pleases.
Their idols are silver and gold,
The work of men’s hands.
They have mouths, but they do not speak;
Eyes they have, but they do not see;
They have ears, but they do not hear;
Noses they have, but they do not smell;
They have hands, but they do not handle;
Feet they have, but they do not walk;
Nor do they mutter through their throat.
Those who make them are like them;
So is everyone who trusts in them. [emphasis mine]
To paraphrase David Foster Wallace, the compelling reason for maybe choosing to worship God is that pretty much anything else you choose will eat you alive. We’re already seeing worship of AI go mainstream, and whether you think this is plain and simple delusion or something more sinister, I expect this trend will only pick up steam as the models improve. We will see what becomes of these new worshipers.
Do you remember Pokémon Go, the game where people traveled around the real world to collect Pokémon on their phones? It seems harmless, but it’s a good example of how easily people can be manipulated without even realizing it. By placing Pokémon at specific real-world locations, the game’s creators were able to drive physical foot traffic to those spots and then use that data to refine their AI mapping models. It’s population-level behavior modification occurring below conscious awareness. That’s a dry run.
Think about how Spotify already nudges users toward AI-filled playlists of “ambient, classical, electronic, jazz, and lo-fi beats” in order to avoid paying royalties on background music. Or look at how Netflix is encouraging “casual viewing” — aka movies and TV shows made to play in the background — in order to pump up ad revenue, its most exciting business line. The common thread is a pliable, passive viewer. Now extrapolate that trend to AI assistants and integrations throughout our lives. The owners of those technologies will have leverage, motive, and numerous case studies on milking users for profit. If adopted unquestioningly, rather than freeing us for more human tasks, they are likely to make us even more dependent and machine-like.
As Wendell Berry put it, “It is easy for me to imagine that the next great division of the world will be between people who wish to live as creatures and people who wish to live as machines.”
The choice is yours, for now. Choose wisely. Your life may depend on it.