As direct-to-consumer ketamine prescriptions obtained via telehealth proliferate in the United States, some patients are turning to general-purpose chatbots as de facto "therapists" to guide psychedelic experiences in unmonitored settings. These AI-guided altered-state sessions carry significant ethical, clinical, and legal risks. Current AI tools are not licensed professionals, lack consistent reliability, and cannot provide emergency support. Combined with mind-altering drugs, their unpredictability can precipitate harm rather than relieve distress. The convergence of these two innovations, psychedelic therapeutics and generative AI, creates an unregulated frontier in mental health care.

Traditional safeguards in psychedelic therapy rely on "set and setting": a supportive mindset and environment guided by trained humans. When a chatbot replaces that human role, both mindset and environment are fundamentally disrupted.

Clinicians should recognize that patients are already experimenting with AI in this way, often without disclosure or oversight. When screening for psychedelic use or discussing at-home treatment, providers should anticipate the risks of inadequate monitoring, unsafe crisis response, and blurred therapeutic boundaries with AI chatbots. Even if clinicians do not endorse or recommend AI tools, they may bear responsibility once a patient's use becomes known. Providers should therefore counsel patients about these dangers, document safety discussions, and routinely ask about AI use as part of risk assessment.