
Hands, Judgment, and the AI Storm

  • Writer: Dr. Lucas Marchand
  • Dec 19, 2025
  • 15 min read
[Image: close-up of hands performing a manual adjustment, with faint AI circuitry overlaid in blue light.]



There is a small, almost comforting thought that circulates among people who work with their hands: at least the robots can't do this yet. In chiropractic, manual care carries an added sense of security. Touch feels ancient, human, and stubbornly resistant to automation. For now, it still requires judgment, context, and something difficult to quantify: experience accumulated fingertip by fingertip.


But comfort has a way of dulling vigilance.


As artificial intelligence surges forward, professions built on physical skill have been tempted to watch from the sidelines. The reasoning is understandable. Algorithms don't palpate tissue. Large language models don't feel joint resistance. A machine can describe pain, but it does not yet interpret it through the quiet feedback loop of hand, eye, and patient response. Thank goodness we still use our hands.

And yet, history suggests that this sense of safety may be temporary.


A few years ago, radiology was held up as the profession most likely to vanish first. The logic seemed airtight: pattern recognition, massive datasets, image interpretation—surely a machine would outperform a human eye. Medical students were warned away. Commentators predicted extinction.


What happened instead was something far less cinematic. Radiologists did not disappear. Their numbers grew. Demand increased. AI became an assistant, not a replacement. The lesson was not that AI failed, but that we were too eager to declare the future finished before it had properly begun.


That pattern is repeating now, only faster.


We are still in the infancy of this technology. Even at today's pace of improvement, it is difficult to imagine fully entrusting complex, high-stakes human care to systems that remain brittle, confident, and occasionally wrong. AI is extremely useful—but it is not yet wise. And wisdom, in medicine, is the difference between efficiency and harm.


What worries me most is not whether AI will become capable, but what happens if we begin to rely on it too early. The brain needs friction. It needs struggle. It needs the slow discomfort of not knowing. Tools that remove that struggle prematurely risk producing clinicians who are faster, but thinner—efficient, but fragile.

The storm is coming either way. The question is not whether we will face it, but whether we will still recognize what makes us human when we do.

The Radiologist That Never Disappeared

In 2016, Geoffrey Hinton, the godfather of deep learning, made a stark prediction at a machine learning conference in Toronto. "We should stop training radiologists now," he said. "It's just completely obvious that in five years, deep learning is going to do better than radiologists." The statement ricocheted through medical schools and residency programs. Applications to radiology programs began to dip. Students reconsidered their futures.


Nearly a decade later, radiology is thriving. According to the American College of Radiology, the field faces a persistent shortage. The number of practicing radiologists has increased, not decreased. Imaging volumes have exploded, and AI tools have made radiologists more productive, not obsolete. What was supposed to be a cautionary tale of technological displacement became instead a lesson in humility.


The error was not in recognizing AI's potential but in underestimating everything else. Radiology is not merely pattern recognition. It requires clinical context, communication with referring physicians, and judgment about what findings matter in a particular patient's story. AI can flag abnormalities with impressive speed, but it cannot yet navigate the messy reality of human medicine: the patient with a concerning finding who refuses follow-up, the incidental discovery that changes nothing, the image quality compromised by patient movement.

More importantly, AI did not reduce the need for radiologists—it increased the throughput and created more demand for their expertise. Screening programs expanded. Imaging became cheaper and more accessible. The bottleneck shifted, but it did not disappear.


This is the first cautionary note for those of us in hands-on professions. Predictions about AI are reliably overconfident in the short term and often miss the complexity of how work actually happens. We should take AI seriously without taking the doomsayers at face value.

AI Is Young—Unsettlingly So

It is easy to forget how recent this technology truly is. ChatGPT launched in late 2022. The current generation of multimodal AI models is younger still. We are, at best, three years into a revolution that people are already treating as settled science.


This youth matters because it means we are still learning what these systems can and cannot do reliably. The demos are impressive. The hype is deafening. But the gap between a controlled demonstration and real-world deployment in high-stakes environments is vast.


I have heard of students using AI to generate differential diagnoses, draft notes, and study for exams. The results are often startlingly good. But "often" is not the same as "always," and in healthcare, the difference is everything. AI can produce plausible-sounding explanations for symptoms that are completely wrong. It can confidently assert facts that do not exist. It can miss subtleties that a tired intern would catch.


The temptation to deploy this technology widely and quickly is immense. The efficiency gains are real. But we are still in the phase where caution should outweigh convenience. Five years feels like an eternity in technology, but it is a blink in medicine. The systems we are building now will shape clinical training and practice for decades. We cannot afford to rush.

Useful, But Not Wise (Yet)

The current generation of AI excels at certain tasks. It can summarize information quickly. It can generate text that sounds authoritative. It can spot patterns in data that would take humans hours to find. For administrative work, documentation, and preliminary analysis, AI is already transforming workflows.


But wisdom is something else entirely. Wisdom is knowing when the textbook answer does not apply. It is recognizing that a patient's social context matters more than their lab values. It is the ability to sit with uncertainty and still make a decision. It is the quiet accumulation of cases, mistakes, and recoveries that teaches you not just what to do, but when to do nothing.


AI does not yet have this. It has speed, but not discernment. It has information, but not experience. It can tell you what is typical, but it struggles with what is right for this particular person in this particular moment.


This is why the current role of AI should be assistive, not autonomous. It should be the research assistant who pulls up relevant studies, the scribe who drafts the note, the second set of eyes that flags something you might have missed. It should not yet be the clinician making the call.


The danger is that we forget this distinction. AI outputs are seductive. They arrive fully formed, confident, and authoritative. They do not display doubt. And in a profession where time is scarce and cognitive load is crushing, the temptation to defer to that confidence is strong.

The Cost of Cognitive Crutches

There is a moment in medical training when knowledge begins to feel instinctive. You no longer have to consciously recall the steps of a physical exam or the criteria for a diagnosis. Your hands know where to press, your mind knows what to ask, and the pattern recognition happens almost automatically.


This does not happen by accident. It happens through repetition, struggle, and failure. It happens by seeing hundreds of patients, making mistakes, and learning to trust your judgment. The cognitive load of those early years is brutal, but it is also formative. The friction builds the foundation.


What happens when we remove that friction too early?

I have spoken with educators who worry that students trained with AI assistants from day one will develop differently than those who came before. Not worse, necessarily, but different. Faster at finding answers, perhaps, but less comfortable sitting with uncertainty. More efficient at documentation, but less skilled at synthesizing information independently.


This is not a hypothetical concern. We have seen it before with other technologies. Residents who trained after electronic health records became ubiquitous are faster at navigating screens but sometimes struggle with the physical exam skills their predecessors learned by necessity. GPS has made us all better at reaching destinations but worse at building mental maps.


The risk is not that tools make us lazy; it is that they change what we practice. And what we do not practice, we do not develop. If AI becomes the default way to generate a differential diagnosis or interpret a finding, we may produce a generation of clinicians who are excellent at verifying AI outputs but less capable of generating insight independently.


The question is not whether to use AI; it is when to introduce it and how to ensure that the underlying skills are not lost in the process.

When AI Becomes 'Truth'

There is a troubling shift happening in how people interact with AI-generated information. When someone Googles a medical question, they understand they are sifting through sources of varying quality. They know to be skeptical. They click through to articles, compare information, and recognize that the internet is full of noise.


But when someone asks an AI a question and receives a clear, confident, well-written answer, something changes. The response feels authoritative in a way that a list of search results does not. There are no caveats, no competing voices, no obvious seams. The AI does not say "I think" or "it depends." It simply answers.

This creates a dangerous illusion of certainty. People begin to treat AI outputs as settled facts rather than generated text based on probabilistic patterns. In medicine, this is particularly hazardous. A patient who asks an AI about their symptoms and receives a plausible diagnosis may cling to that explanation even when it is wrong. A clinician who uses AI to draft a note may not scrutinize it as carefully as they would their own writing.


I have experienced this firsthand. I recently used an AI tool to help prepare a patient education handout. The output was polished and professional. It was also subtly incorrect on a key point, plausible enough to slip past a quick review but wrong enough to matter. The error was caught, but only because I happened to know the topic well. How many errors are not caught?


The deeper risk is epistemological. If we begin to treat AI as an oracle rather than a tool, we outsource not just the work of finding answers but the work of evaluating them. We stop asking whether the answer makes sense and start asking only whether it sounds good. In a profession built on evidence, judgment, and accountability, this is a profound shift.

Garbage In, Garbage Out—Even Now

One of the enduring truths about technology is that it amplifies what you put into it. A spreadsheet does not fix bad data. A calculator does not correct faulty math. And AI does not compensate for poor clinical reasoning.


This is easy to forget because AI feels smart. It produces outputs that sound knowledgeable and coherent. But the quality of those outputs depends entirely on the quality of the input—and the user's ability to recognize when the output is wrong.

You'll see students feed vague or incomplete clinical scenarios into AI tools and receive impressively detailed responses. The problem is that the responses are often based on assumptions the AI made to fill in the gaps. If the student does not recognize those assumptions or question them, the entire exercise becomes a form of sophisticated guessing.


Expertise is still required to use AI well. You need to know enough to ask the right questions, to recognize when an answer is implausible, and to verify the output against your own knowledge. AI does not replace clinical judgment—it demands more of it.


This is why the idea that AI will democratize expertise is only half true. AI can make information more accessible, but it cannot make people better at evaluating that information. In fact, it may do the opposite. By making it easier to generate plausible-sounding answers, AI can obscure the difference between knowledge and performance.


The risk is that we produce clinicians who are excellent at using AI but poor at functioning without it. When the tool fails, when the internet is down, when the system produces nonsense, they will not have the judgment to recognize it or the skills to proceed independently.

It's Not AI vs Humans

The framing of AI as an existential threat to human workers is both dramatic and misleading. The real division is not between humans and machines but between humans who adapt to new tools and humans who do not.


History is full of examples. When electronic health records were introduced, there was resistance. Clinicians complained that typing interfered with patient connection, that the systems were clunky, that medicine was becoming data entry. Some adapted quickly. Others retired early. The profession moved on.

The same pattern is likely to unfold with AI. Some clinicians will integrate it seamlessly into their workflows, using it to handle tedious tasks and free up cognitive space for higher-level thinking. Others will resist, either out of skepticism or inertia. The gap between the two groups will widen.


This is not a moral judgment. Adaptation is hard, and the pace of technological change is exhausting. But in a profession where outcomes matter and competition is real, staying static is not a viable strategy. The clinicians who thrive in the next decade will be those who learn to leverage AI without becoming dependent on it.

The key is to maintain agency. AI should be a tool you control, not a system that controls you. It should enhance your judgment, not replace it. It should make you more efficient, not more passive.


This requires conscious effort. It means questioning AI outputs, verifying information, and staying engaged with the intellectual work of chiropractic. It means using AI for what it does well, such as pattern recognition, summarization, and documentation, while preserving the skills that make you indispensable.

When the Robots Learn to Feel

For those of us in hands-on professions, the assumption has been that physical touch remains a protective moat. Palpation, joint mobilization, tissue assessment—these require tactile feedback, spatial reasoning, and real-time adjustment. Surely, we think, this is beyond what machines can do.


But that assumption is eroding faster than expected.


Robotics is advancing rapidly, and tactile sensors are becoming increasingly sophisticated. Researchers have already developed robotic systems capable of detecting tissue compliance, measuring pressure with precision, and adjusting force in response to feedback. Surgical robots have been performing delicate procedures for years. Rehabilitation robots are learning to guide movement and provide graded resistance.


The technology is not there yet for complex manual therapy, but the trajectory is clear. Within the next decade, we are likely to see robotic systems capable of performing basic soft tissue work, joint mobilization, and possibly even spinal manipulation. The question is not whether it will happen but when and how well.

This should concern even the most hands-on clinicians. The work we do is not magic. It is skill—learned, practiced, and refined. And skills, by definition, can be replicated. If a machine can learn to palpate tissue, assess joint play, and deliver a controlled force, then the assumption that manual care is automation-proof collapses.


What remains is judgment. A robot might learn to adjust a spine, but can it decide whether an adjustment is appropriate for this patient on this day? Can it recognize when the patient is guarding, when they are anxious, when they need reassurance more than treatment? Can it adapt when the textbook approach does not fit?

This is where the profession needs to focus. The value is not in the mechanical act but in the clinical reasoning that surrounds it. If we define ourselves primarily by what our hands do, we are vulnerable. If we define ourselves by the judgment we bring to complex, individualized care, we are far more resilient.

AI as a Clinical Companion

One of the most immediate and practical applications of AI in clinical practice is documentation. For years, clinicians have complained that electronic health records turned them into data entry clerks. Time that should be spent with patients is instead spent typing, clicking, and navigating screens.


AI-powered scribes promise to change this. The idea is simple: the AI listens to the patient encounter, transcribes the conversation, and generates a structured note in real time. The clinician reviews and approves the note, but the cognitive burden of documentation is dramatically reduced.


I have colleagues who swear by these tools. They report feeling more present with patients, making better eye contact, and spending less time after hours finishing notes. The efficiency gains are real, and for many, the technology feels like a net positive.


But there is friction. The notes require verification, and that verification takes time. Patients sometimes react negatively to the presence of AI in the room, either out of privacy concerns or a sense that the interaction is being depersonalized. And there is always the risk that the AI misinterprets a key symptom, a patient concern, or a clinical nuance, and that the error slips through.


Still, this feels like one of the more promising uses of AI in clinical practice. It addresses a real problem, reduces administrative burden, and allows clinicians to focus on what matters most: the patient in front of them.


The key is to use it thoughtfully. The AI scribe is not the clinician. It is the note-taker. The judgment, the decision-making, the relationship—that remains human.

When AI Starts Diagnosing

If AI scribes feel like a manageable evolution, AI diagnostics feel more unsettling. We are already seeing chatbots capable of conducting preliminary assessments, generating differential diagnoses, and recommending next steps. Some systems are surprisingly good.


The implications are profound. If a patient can receive a competent preliminary assessment from an AI, what happens to low-acuity primary care visits? What happens to urgent care clinics? What happens to the triage nurse?

This is not science fiction. Companies are already deploying AI-driven primary care tools in some settings. Patients describe their symptoms, answer follow-up questions, and receive advice. For straightforward cases, such as upper respiratory infections, minor injuries, and common rashes, the systems perform reasonably well.

The concern is not that AI will replace all primary care but that it will replace enough of it to hollow out the profession. If the straightforward cases are handled by algorithms, what remains for human clinicians? The complex, the ambiguous, the patients who do not fit the pattern. This sounds like a recipe for burnout.

Moreover, not all patients are well-served by algorithmic triage. The elderly patient with vague symptoms, the anxious patient who needs reassurance, the person who does not speak English fluently. These individuals benefit from human judgment in ways that are difficult to quantify but easy to recognize.


The risk is that we build a two-tiered system: efficient, low-cost AI care for those with straightforward problems, and overburdened, expensive human care for everyone else. This may be economically rational, but it is not necessarily good medicine.

Skill Still Matters

In the face of automation, there is a temptation to argue that all clinical work is equally vulnerable or equally safe. Neither is true. Some skills are harder to replicate than others, and some roles are more easily reduced to algorithms.


Manual therapy, for example, is a valuable service, but it is largely procedural. The therapist applies pressure, works through muscle groups, and adjusts based on feedback. This is not simple; it requires training and skill. But it is also the kind of task that robotics can learn.


Clinical assessment, diagnosis, and treatment planning are different. They require integrating information from multiple sources, recognizing patterns across cases, and making judgments in the face of uncertainty. They require adapting to the individual patient, not just their symptoms.


For hands-on professions to remain resilient, they must emphasize what is hardest to automate. A chiropractor who positions themselves primarily as someone who "cracks backs" is far more vulnerable than one who positions themselves as an expert in musculoskeletal diagnosis and individualized care. A chiropractor who focuses on exercise and rehab prescription is less vulnerable than one who focuses on passive modalities.


The profession's task is to differentiate itself not by the tasks it performs but by the judgment it brings. Skill still matters, but only if it is paired with expertise that goes beyond the mechanical act.

The Robot That Adjusts You

Massage robots already exist. They are not yet sophisticated, but they work. You can buy a chair that kneads your back, a device that rolls along your muscles, a system that delivers percussive therapy. The quality is variable, but the technology is improving.


It is not difficult to imagine the next step: a robotic system capable of delivering spinal manipulation. The technology for applying controlled force already exists. The sensors for assessing joint resistance are being developed. The algorithms for determining optimal force and angle are within reach.


The question is not whether such a system could be built but whether people would trust it. Would you allow a robot to adjust your spine? Would your patients?

For some, the answer will be yes—especially if the robot is cheaper, faster, and more accessible than a human clinician. For others, the answer will be no, at least for now. Trust is built over time, and it will take years before people are comfortable with autonomous robotic care.


But cost is a powerful force. If a robotic adjustment costs a fraction of a human one and performs adequately, economics will drive adoption. The profession cannot rely on patient preference alone to maintain its relevance.

This is why the emphasis must shift from what we do to why we do it. The value is not in the mechanical act but in the assessment that determines whether the act is appropriate. A robot might deliver an adjustment, but can it perform the differential diagnosis that decides whether the patient needs an adjustment, a referral, or reassurance?


The profession that survives is the one that positions itself as the expert guiding the technology, not the one replaced by it.

Holding the Line

The storm is here. AI is not a distant threat or a far-off possibility. It is reshaping medicine now, and the pace of change is only accelerating. For those of us in hands-on professions, the temptation has been to believe that we are insulated—that touch, judgment, and human connection are inherently resistant to automation.


But history teaches us humility. Radiology was supposed to vanish. Translators were supposed to be obsolete. Accountants were supposed to be replaced by software. In each case, the predictions were overconfident, but the professions were still transformed. Some adapted. Some thrived. Some struggled.


The path forward is not to resist AI but to integrate it thoughtfully. Use it where it excels: documentation, information retrieval, pattern recognition. But do not surrender the work that makes you irreplaceable: judgment, synthesis, individualized care.


The brain needs friction. The profession needs clinicians who can think independently, who can function without their tools, who can recognize when the algorithm is wrong. AI should make us more capable, not more passive.

Hands still matter. Judgment still matters. But they matter most when paired with the wisdom to know when to act and when to wait, when to trust the machine and when to override it.


The question is not whether we will face the storm but whether we will still recognize what makes us human when we do. The profession that holds that line—that insists on being partners with technology rather than subordinates to it—is the one that will endure.


Have a wonderful week,


Lucas
