January 2026
Artificial intelligence has become a normal part of my daily routine. I use it to organize ideas, work through writer’s block, and make my workflow more efficient. I’m even using AI to help write this post right now, which makes the question feel more personal than theoretical. Because of that, I am not against AI. In fact, I find it genuinely useful. What interests me, though, is not whether AI helps, but what happens when we begin to rely on it so consistently that it starts to feel less like support and more like a substitute for our own thinking.
I use AI constantly, and for more than just big projects. It helps me answer quick questions, format my ideas, find and organize sources, and clarify concepts when I’m stuck. Sometimes it feels like having a tutor available 24/7, which is exactly why it’s easy to take for granted. The problem is that when AI becomes the first place I go instead of the last, I start to wonder how much of my thinking is actually mine. It’s not that AI is bad; it’s that I don’t want to lose the ability to struggle through an idea, make mistakes, and still come out with something original. I want AI to support my work, not replace the part of me that learns by doing.
This question is at the center of Jonas Rodrigues’s post, “The AI Balancing Act: Your New Superpower, Not Your Substitute.” Rodrigues argues that the real challenge of this moment is not learning how to use AI, but learning how to use it without losing the skills that make us human in the first place.
“The future belongs to those who know how to use AI. But the present requires us to figure out how to use it without losing ourselves in the process.”
I agree strongly with this framing. AI is not going away, and pretending otherwise only puts us at a disadvantage. Rodrigues’s comparison of ignoring AI to refusing to use a calculator during a math exam feels accurate: the tools are here, and they are already shaping how work gets done. Where his argument becomes most compelling, however, is in his warning about crossing the line between amplification and dependence.
Rodrigues draws a clear distinction between AI as a tool and AI as a crutch. When AI is used as a tool, humans remain in control, setting direction, exercising judgment, and making the final decisions. When AI becomes a crutch, critical thinking is quietly outsourced. Ideas arrive fully formed, friction disappears, and the discomfort of the blank page is avoided entirely.
This distinction resonates with my own experience. AI has made starting easier, but it has also made me question how often I reach for assistance before sitting with my own thoughts. When ideas come too quickly and structure is always suggested, it becomes harder to tell where my thinking ends and the machine’s begins. The danger is not laziness but erosion: the slow erosion of mental autonomy.
A recent example: I was starting a paper, and instead of brainstorming in the blank document, I immediately asked AI for an outline. The outline was helpful, but it also felt like letting someone else do the thinking for me. Only afterward did I realize I had not even tried to generate ideas myself. That moment made me wonder: if AI keeps making everything easier, will I still know how to think independently when it is not available?
Rodrigues describes one consequence of overreliance as the rise of “beige” content: work that is technically correct, polished, and ultimately forgettable. Because AI is trained on averages, it excels at producing what is acceptable rather than what is distinctive. Without human judgment and lived experience shaping the final product, originality flattens into uniformity.
This concern connects closely to ideas explored by MIT professor Sherry Turkle, whose work focuses on how technology reshapes human behavior and thought. Turkle does not argue against technology itself. Instead, she asks us to pay attention to what we trade away in exchange for convenience.
“We expect more from technology and less from each other.”
While Turkle was not writing specifically about generative AI, her observation feels especially relevant in this context. When we begin to expect AI to think, write, and create for us, we may also begin to expect less from ourselves. The issue is not that AI removes effort entirely, but that it reduces our tolerance for the struggle that produces original thought.
This tension is already visible in art and social media. AI-generated images, captions, and videos increasingly blend seamlessly with human-made content. While this can be impressive, it also makes authenticity harder to recognize. When everything is optimized, polished, and immediate, human experience risks becoming interchangeable.
Looking forward, it is worth asking whether this shift could extend into other areas of life. Sports, for example, are grounded in human limitation—discipline, failure, and effort. While AI already plays a role in analytics and training, imagining a future where performance itself is increasingly mediated by machines challenges our understanding of achievement. If effort becomes automated, what happens to meaning?
The concept most at stake here is autonomy. Human autonomy is not just the ability to make choices, but the ability to develop ideas independently. If AI consistently removes friction from thinking and creating, we risk losing confidence in our own judgment, even as productivity increases.
This does not mean AI should be rejected. Rodrigues is right to frame AI as a co-pilot rather than an autopilot. Used intentionally, it can free up time for deeper thinking, creativity, and strategy. Used uncritically, it risks replacing the very processes that give our work meaning.
I don’t want to stop using AI. I want to remain aware of how I use it. The challenge of this moment is not choosing between humans and machines, but learning how to preserve human thought, effort, and autonomy in a world where assistance is always one prompt away.