
Mental Healthcare Is No Longer a Monopoly: Authority And Autonomy in the Age of Artificial Intelligence


Yogarabindranath Swarna Nantha, Sean Thum, Dinyadarshini Johnson, 1st May 2026


“The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.” — Alvin Toffler


Two Different Scenarios, One Converging Outcome


Imagine this. Real firsthand accounts from two patients, each walking into a clinic desiring only guidance and a therapeutic relationship. What unfolded can be summarized as follows:


1.    A 74-year-old retired teacher, grieving the loss of her husband and burdened by anxiety, seeks psychiatric care after a prior medication experience marked by a cascade of side effects that led her to stop treatment abruptly. Her earlier encounter followed a familiar, transactional script: “Let’s start you on medication—take it, and we’ll review in three months.” There was little explanation—only instruction, with an expectation of compliance. Returning with a more collaborative intent, she raises her concerns and cautiously suggests a trial of an alternative medication, informed by her own reading (including ChatGPT). As recounted by the patient, the consultation then shifted abruptly: she experienced her doctor’s response as an outburst, followed by dismissal and a warning against non-human sources.


2.   A 25-year-old factory worker with generalized anxiety disorder, on long-term antidepressants, presents to a GP with poor sleep and body aches. Through ChatGPT, she identifies links between her symptoms and ongoing family conflict and workplace bullying—insights that bring greater relief when explored. Her GP validates these concerns and commends her thoughtful engagement with information. Her psychiatric care, however, remains medication-focused, with repeated instructions to continue treatment and return in six months, leaving her feeling misunderstood—praised on “good days”, labeled non-compliant when she deviates from the script.


In these two situations, what began as an attempt at shared decision-making regresses into a growing sense of disconnect and disengagement. What stands out is obvious: a fleeting attempt at living out an outmoded model of care. Prescriptive and authoritarian, this system leaves patients feeling unheard, marginalized, and disempowered. At the heart of this controversy lies a central question: how has a system that champions patient-centered care and autonomy come to displace the patient’s own voice, leaving decision-making largely contained within a clinician-guided framework?


The Paradox of Care 


The enduring ethos of medicine transcends time—to cure sometimes, to relieve often, to comfort always. Caring for others spans a spectrum that is both technical and relational, requiring not only clinical expertise but sustained engagement, trust, and understanding. Behind the scenes, however, a more unsettling dynamic unfolds: noble ideals clash with structural conditions that actively shape how patient care is delivered in practice. This conundrum, therefore, warrants closer examination.


Doctors often experience a paradoxical ambivalence. They are trained to operate within hierarchical, tightly regulated environments that shape their clinical instincts and modes of engagement; maintaining coherence and avoiding deviation become implicit expectations of practice. Within this context, a contradiction emerges. On one hand, doctors learn to align with established modes of engagement under systems of clinical oversight, positioned as recipients of direction within these structures. On the other, they are deprived of meaningful opportunities to express empathy through continuity of care, even though medicine’s ethical foundation calls for something more: an openness to patient narratives, a willingness to engage uncertainty, and a commitment to empathy and validation.

This creates a tension between the relational ideals of medicine and the conditions under which it is practiced.


In an almost Orwellian reenactment, “support” becomes performative: so long as doctors remain aligned with structured lines of unquestioned authority, both they and their patients are deemed “good,” adhering to a script that allows little deviation or discernible transformation. Within such a system, doctors become conduits of directives—sometimes felt as clinical “stings”—which they pass or discharge on to patients. Ultimately, the profession is venerated with blind fervour, its tenets treated as beyond question and its pedestal guarded at all costs. The consultation, then, risks becoming less a space for collaboration and more a subtle exposition of power: a command-and-control exchange where instructions flow downward, and patients are expected to comply rather than participate in shaping their own destiny.


What presents as a corridor for patient engagement may be experienced as a challenge to established orthodoxy, recasting opportunities for dialogue as disruptions to routine patterns of care. In a troubling twist, this perpetuates a cycle in which patients, in the very act of seeking care, find themselves contending with yet another layer of control — imposed by those who ought, in spite of their limitations, to serve as their advocates. It is within these gaps that patients increasingly seek alternative sources of interpretation and support, contributing to a redistribution of autonomy, particularly in this age of AI, where avenues for sense-making extend beyond traditional clinical encounters.


Staying Above Water


We can infer, from the aforementioned clinical scenarios, that clinicians may be ill-equipped to traverse the vast and turbulent seas of human behaviour. Confronted with the depth of psyche—both their own and that of their patients—they are, at times, cast into the deep end of a psychological whirlpool without the harness required to navigate it or emerge unscathed. Without a secure anchor, this struggle can give rise to uncertainty that is concealed and, often unconsciously, projected onto the patient. Soon, a safer—yet potentially more dangerous—coping strategy arises: the oversimplification of complex, deeply human experiences into the more manageable terrain of diagnosable mental disorders (Maddux, 2008; Lane, 2008).


While grappling with chaotic thought processes, often clutching at straws, some clinicians operate without sufficient therapeutic maturity. Though unintended, their actions can leave a trail of fretful encounters—interactions that carry enough weight to unsettle the psychological disposition of the patient and, at times, leave them more fractured than before. As a result, patients may unintentionally develop post-traumatic stress, built upon repeated affronts to their conscience. The irony is that these very dynamics have been extensively documented by a psychiatrist himself: Bessel van der Kolk, in his seminal work The Body Keeps the Score.


Perhaps most revealing is that these critiques are not external attacks, but voices from within the field itself. Yet, despite this internal dissent, many clinicians appear curiously resistant to warnings that challenge the foundations of their own practice.


But Those Glory Days Are Over


In a quiet storm, the ownership of knowledge is quickly shifting beneath us. Patients today—more than at any point in history—have near-unrestricted access to information that was once guarded within the walls of the medical profession. What we are witnessing is a gradual leveling of knowledge: a lateralization in which patients actively assess their conditions, assume greater responsibility for their health, and demand care that aligns more closely with established standards of medicine.


After generations of deprivation from what was once a sacrosanct body of knowledge, patients are now observing—and participating in—the deconstruction of a hierarchical system that long governed autonomy in healthcare, dominated solely by the healthcare profession. What once resembled a class or caste divide in access to medical understanding is steadily crumbling.


The first rupture in this structure came with the advent of the internet. Today, that shift is accelerating at breakneck speed with the development of artificial intelligence. Patients are no longer passive recipients or observers of care; they are increasingly equipped with tools that support self-management and informed decision-making. In doing so, they challenge the traditional role of clinicians as sole gatekeepers of knowledge, reshaping the balance of authority in modern medicine.


This new reality must be understood and embraced as the future of healthcare, where a more distributed and collaborative model of consultation is no longer optional, but compulsory. 


Sign of The Times


The shift taking place is neither optional nor temporary. Nowhere is this more evident than in mental healthcare. Patients today are no longer content remaining within a model of care where their role is limited to compliance. In an era of unprecedented access to information and, most importantly, to interpretation, patient expectations have evolved in ways that are neither incidental nor reversible.


The effects of this paradigm shift become more pronounced when viewed within the context of artificial intelligence (AI). In mental healthcare, AI does not simply introduce new tools; it alters the terrain on which care is delivered. Patients no longer arrive with questions alone; they come armed with informed interpretations—shaped, refined, and sometimes reinforced by a body of evidence so complex that it defies ordinary human thought processes.


However, such impressive recourse to a vast repository of knowledge does not, at least theoretically, supplant clinical expertise and wisdom. What is now clear is that the exercise of medical expertise requires reconfiguration to align with contemporary needs. In mental healthcare, authority that was once maintained through asymmetry of knowledge and position is becoming less tenable. What matters now is the clinician’s ability to engage with the patient’s narrative, to assess the interpretations they bring, and to respond with empathy, structure, and reasoning. The task is no longer to contain, but to work through—collaboratively, and with psychological depth.


In a way, it is somewhat ironic that, in the age of AI, what is valued most is the clinician’s capacity to remain distinctly human. Clinicians need to learn to operationalize conscience.


Rules of Engagement, Redux


For clinicians, this requires a recalibration toward a modified authority grounded in reasoning, transparency, and continuity. Authority is no longer predicated on the ownership of information, but on the ability to situate it—within the clinical space, within a relationship, and within the lived experience of the patient. By default, the clinician inherits the role of guide and mediator, assisting the patient in a shared navigation through the systems they engage with.


For patients, the change is just as important. Autonomy in mental healthcare cannot rest on access to information alone. Patients no longer rely on a single point of authority; their understanding is shaped by multiple inputs: clinical encounters, personal reflection, digital systems, and social environments. This shift calls for a higher standard of engagement—one that reflects the capacity to think critically, to tolerate ambiguity, and to revise one’s understanding when necessary. Without this, participation risks becoming reactive rather than constructive.


What is emerging is not autonomy as unsolicited independence, but autonomy as something distributed—across time, across contexts, and across sources of interpretation. This is where the structure of the system becomes critical. Reverting to the traditional model of mental healthcare risks reintroducing—and perpetuating—the very dynamics it seeks to reconcile. When authority is exercised without the necessary reflection, interpretation is more likely to be perceived as imposed than explored. The clinical space quickly mirrors the outdated hierarchies of the past. In such a situation, the patient is reminded of a glaring recidivism: the slide back into a “command and control” power structure. Patients hold in contempt consultations shaped by affectation—marked by forced compliance stripped of understanding. Care must be seen to be delivered and developed in equal measure.


The clinician must be cognizant of the broader context in which these interactions occur—the systems that have shaped how patients relate to themselves, to authority, and to uncertainty. This requires a different species of clinicians: one that is less directive and more discerning, less about resolution and more about understanding. When patients bring in AI-mediated interpretations, the task is not to dismiss them, nor to defer to them, but to work through them. Interpretation is no longer unilateral; it is co-constructed. 


A Safe Playground


In a similar vein, it is worth remembering that the art of medicine is shaped by empirical inquiry and disciplined by the organization of thought. It is not an exact science; uncertainty is inherent, and without structure, that uncertainty can become unsafe. Clinical expertise therefore remains essential—not as an instrument of control, but as a framework that allows exploration to occur safely and with purpose. The balance is delicate: too much rigidity stifles curiosity, while too little dissolves coherence. 


AI can reinforce existing biases, flatten complexity, and offer premature certainty in domains that require nuance. Yet ambiguity is not a limitation but a defining feature of an unpredictable field such as medicine. To that end, closing our minds to the potential benefits of AI is not only counterproductive but a brazen refusal of what appears to be an inevitable future. The right stance is one in which AI may provide answers, but never at the cost of inquiry. Used well, AI can support the process of decision-making. In the midst of these changes, the core of mental healthcare remains relational, iterative, and grounded in trust—none of which can be outsourced.


What is emerging is a redefinition of professional relevance. The clinician is no longer the sole interpreter and must reinvent themselves accordingly. This reconstitution of identity does not render them redundant; rather, their role becomes more refined. They serve as an anchor, bringing clinical judgement into a space where multiple interpretations now exist. Power, then, is redistributed rather than usurped.


Keep Up Or Fall Behind 


The future of mental healthcare will depend less on who holds knowledge, and more on whether it can be worked through over time, within relationships that are stable enough to sustain it. What emerges, then, is not a competition between clinician and machine, but a redefinition of roles. 


The clinician must remain the anchor—the one who holds continuity, integrates meaning over time, and exercises judgment where no algorithm can. In this, the earlier point holds with greater force: the task is not to relinquish expertise, but to mature it. In the presence of AI, to guide without dominating, to interpret without imposing, and to remain steady in uncertainty is no longer aspirational; it is necessary.



© 2025 by The Insight Circle
