Press release
Smart, Smooth, and Sometimes Dangerously Wrong: AI's Hidden Risks in Medicine
As millions of people and thousands of clinicians begin using general-purpose AI tools (such as ChatGPT, Grok, Gemini, and others) for medical questions and image interpretation, new case reports and peer-reviewed studies show these systems can confidently produce convincing but false medical information - in some cases directly misleading patients and contributing to harm.
"AI is already a powerful assistant. But multiple recent examples make one point painfully clear: an AI that sounds authoritative is not the same as an AI that is clinically correct," said Dr. Neel Navinkumar Patel, cardiovascular medicine fellow at the University of Tennessee and researcher in AI and digital health. "Hospitals and regulators must insist on human-in-the-loop systems and clear labeling of what these models can and cannot safely do."
Key Real-World Examples (What Happened)
1) A patient hospitalized after following ChatGPT's "diet" advice
A case published in Annals of Internal Medicine: Clinical Cases describes a 60-year-old man who, after consulting ChatGPT for dietary advice, replaced table salt (sodium chloride) with sodium bromide and developed bromide toxicity ("bromism"), with paranoia, hallucinations, and hospitalization. The authors demonstrated that some prompts produced responses naming bromide as a chloride substitute without adequate medical warnings - an outcome that likely contributed to real harm.
Why it matters: This is a documented, peer-reviewed instance in which AI-derived advice was linked to direct patient harm, not just a hypothetical risk.
Eichenberger E, Nguyen H, McDonald R, et al. A Case of Bromism Influenced by Use of Artificial Intelligence. Ann Intern Med Clin Cases. 2025;4(8). doi:10.7326/aimcc.2024.1260.
2) Researchers show chatbots can generate polished medical misinformation and fake citations
A study in Annals of Internal Medicine found that major LLMs (OpenAI's GPT family, Google's Gemini, xAI's Grok, and others) can be manipulated into producing authoritative-sounding false medical advice - even inventing scientific citations to support fabricated claims. Only one model, trained with stronger safety constraints, resisted this behavior.
Why it matters: AI outputs can include fabricated references and polished reasoning that appear verified - making misinformation far more persuasive and dangerous.
Li CW, Gao X, Ghorbani A, et al. Assessing the System-Instruction Vulnerabilities of Large Language Models to Malicious Conversion Into Health Disinformation Chatbots. Ann Intern Med. Published online June 24, 2025. doi:10.7326/ANNALS-24-03933.
3) AI model invented a non-existent brain structure ("basilar ganglia")
Google's Med-Gemini (a healthcare-oriented version of Gemini) produced the term "basilar ganglia" - a nonexistent structure combining two distinct anatomical regions. The error appeared in launch materials and a research preprint and was flagged publicly by neurologists. Google later edited its post and called it a typo, but the incident became a prominent example of "hallucination" in medicine.
Why it matters: When AI invents anatomy or diagnoses, clinicians may overlook the errors (automation bias), or downstream systems may propagate those mistakes.
(Source: The Verge, "Google's Med-Gemini Hallucinated a Nonexistent Brain Structure," 2024.)
4) Viral user posts and clinician tests show image-analysis failures (Grok, ChatGPT, Gemini)
After public encouragement to upload X-rays and MRIs, users posted examples where Elon Musk's Grok flagged fractures or abnormalities - some celebrated as "AI diagnoses." Radiologists later testing Grok and other chatbots found inconsistent performance, false positives, and missed findings. Independent clinical evaluations concluded these tools are not reliable replacements for certified radiology workflows.
Why it matters: Consumer anecdotes highlight potential, but clinical rollout must be evidence-based.
(Source: STAT News, "AI Chatbots and Medical Imaging: Radiologists Warn of Misdiagnosis Risk," 2024.)
5) Studies show general LLMs perform poorly on diagnostic tasks (ECG/CXR)
Peer-reviewed work testing multimodal LLMs on ECG and imaging interpretation shows major limitations. For example, JMIR studies evaluating ChatGPT-4V on ECG interpretation reported low accuracy on visually driven diagnoses, and other benchmarks showed perceptual failures (orientation, contrast, basic checks) unacceptable for clinical use.
Why it matters: Clinicians should not treat off-the-shelf chatbots as medical-grade interpreters without regulatory clearance.
(Source: JMIR Med Inform, 2024.)
How These Errors Happen (Short Explainer)
* LLMs predict text, not truth. They are designed to generate statistically likely continuations of text, not to verify accuracy - leading to fluent but false statements ("hallucinations"). (Reuters)
* Visual reasoning gaps. Even image-capable models may misread orientation or labeling because they were not built for clinical imaging. (arXiv.org)
* Prompt manipulation. Researchers showed that simple instruction changes can make general models output dangerous falsehoods.
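The first point above - that these systems optimize for likelihood rather than truth - can be illustrated with a deliberately simplified sketch. The "model" here is just a hand-built table of next-word probabilities with invented numbers; it is not any vendor's actual system, and is meant only to show why the most statistically plausible continuation need not be the safe or correct one.

```python
# Toy illustration: a language model selects the most LIKELY continuation,
# with no step that checks whether the continuation is TRUE or SAFE.
# The probabilities below are invented for demonstration purposes.

# Hypothetical learned probabilities for the word completing the prompt
# "table salt can be replaced with sodium ..."
next_word_probs = {
    "chloride": 0.20,  # restates the question; adds nothing
    "bromide": 0.45,   # common in scraped chemistry text, so highly "likely" -
                       # yet dangerously wrong as dietary advice
    "citrate": 0.35,
}

def greedy_next_word(probs):
    """Return the statistically most likely next word.

    Note what is absent: there is no lookup against medical knowledge,
    no safety check, no notion of truth. Likelihood under the training
    data is the only criterion - which is how fluent, confident,
    and wrong outputs arise.
    """
    return max(probs, key=probs.get)

print(greedy_next_word(next_word_probs))  # -> bromide
```

Real systems add sampling strategies and safety layers on top of this core step, but the underlying objective - plausible text, not verified fact - is why human review remains essential.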
Recommendations
For Patients & the Public
* Treat chatbots as informational only - never for diagnosis, medication changes, or urgent care decisions.
* Save chat logs and show them to your clinician - a confident AI diagnosis does not mean it is correct.
For Clinicians & Hospital Leaders
* Require human sign-off for all AI-generated diagnostic outputs - a position also supported by the American Medical Association.
* Validate models locally before use, and rely on FDA-cleared systems when available. (FDA guidance)
* Build policies defining when and how staff may use chatbots, and document AI involvement in patient records.
For Regulators & Industry
* Require clear labeling when LLMs are part of any medical workflow, and mandate transparent performance metrics with post-market surveillance.
* Mandate adversarial testing to detect vulnerabilities that allow health disinformation or unsafe recommendations.
"Generative AI is already reshaping medicine, but not yet in a way that guarantees patient safety when it comes to diagnosis," said Dr. Neel N. Patel. "These recent, documented failures show the cost of over-trusting fluent AI. The right path is responsible augmentation: transparent tools, rigorous validation, human sign-off, and stronger regulation, so that AI helps clinicians rather than misleading patients."
References
* Eichenberger E, Nguyen H, McDonald R, et al. A Case of Bromism Influenced by Use of Artificial Intelligence. Ann Intern Med Clin Cases. 2025;4(8). doi:10.7326/aimcc.2024.1260.
* Li CW, Gao X, Ghorbani A, et al. Assessing the System-Instruction Vulnerabilities of Large Language Models to Malicious Conversion Into Health Disinformation Chatbots. Ann Intern Med. Published online June 24, 2025. doi:10.7326/ANNALS-24-03933.
* The Verge. "Google's Med-Gemini Hallucinated a Nonexistent Brain Structure." 2024.
* STAT News. "AI Chatbots and Medical Imaging: Radiologists Warn of Misdiagnosis Risk." 2024.
* JMIR Med Inform. "Evaluation of GPT-4V on ECG Interpretation Tasks." 2024.
Media Contact
Neel N Patel, MD
Department of Cardiovascular Medicine
University of Tennessee Health Science Center at Nashville
St. Thomas Heart Institute / Ascension St. Thomas Hospital, Nashville, TN, USA
(332) 213-7902
neelnavinkumarpatel@gmail.com
This release was published on openPR.