Would you hire artificial intelligence as your weight-loss coach? If you answered yes, think again. Though still evolving rapidly, AI has already demonstrated nearly superhuman capabilities. It should come as no surprise, then, that AI has broken into the health and wellness industry, offering diet plans, personalized coaching and meal tracking to its users. In addition to mainstream models such as ChatGPT and Google Gemini, weight-loss apps such as MyFitnessPal, Weight Mate and Noom have surged in popularity.

On the surface, the results appear promising. In a study published in the medical journal Obesity Surgery, four professionals in the medical and technology fields found that SureMediks, an AI-powered platform built around a mobile app, helped users lose 14% of their body weight on average. Despite differences in body size and type, 99% of users lost more than 5% of their body weight over 24 weeks. However impressive these results are, the numbers shouldn't blind us to the risks. Policymakers must confront questions of regulation, accuracy and ethical transparency; without proper and thorough oversight, users may unknowingly receive misleading guidance or fall into unhealthy habits.

AI isn't necessarily able to track calories consistently or correctly. A University of Sydney study uncovered startling discrepancies after evaluating 18 AI-driven weight-loss apps: AI-integrated food apps inaccurately estimated the energy content of Asian dishes and struggled with mixed dishes containing a variety of components. Users who rely on flawed estimates risk making misinformed or drastic dietary decisions; such miscalculations may create health risks and ultimately hinder the progress of those seeking AI assistance to improve their health.

Generative AI platforms are not immune to flaws either. Tools such as Google's AI Overviews reflect what is popular rather than what the best evidence supports. They provide responses gathered from across the internet regardless of the sources' expertise, and they generate different responses on different days. The Center for Science in the Public Interest, for example, received answers to questions about healthy food and diet that were contradicted by scholarly literature. Similarly, while ChatGPT can offer nutrition tips and meal plans, it does not account for personal health profiles or histories. Everyone has different health and nutritional needs depending on their genetic makeup, health conditions, cultural context, daily habits and more, some of which may fluctuate day to day. Offering dietary suggestions without a prior medical evaluation may do more harm than good, especially for individuals with comorbidities or specific dietary restrictions.

Even more troubling is AI's potential role in promoting a harmful diet-culture mentality. The National Eating Disorders Association replaced its national helpline with a chatbot named Tessa to support those struggling with eating disorders. During testing, however, Tessa's "healthy eating habits" recommendations mirrored misleading weight-loss rhetoric that could promote unhealthy or disordered eating patterns, even among those who aren't otherwise vulnerable. This concern is underscored by the Center for Countering Digital Hate: after testing six popular generative AI chatbots, the center's researchers found that they generated dangerous content ranging from advice on extreme dieting to ways to meet unhealthy body standards.
With AI proven capable of worsening disordered thinking and reinforcing a culture of diet and thinness disguised as wellness, the implications are real, severe and poised to affect millions of users. Leveraging AI to aid weight loss and improve health has undeniable potential, but that potential can easily be overshadowed by the technology's inherent risks. While some platforms have barred their AI tools from producing disordered-eating content, others still have vague policies. To eliminate this ambiguity and address potential harm, policymakers must push regulatory bodies such as the Food and Drug Administration to expand their oversight to include health and wellness AI tools, holding them to the same ethical and safety standards as medical devices. At the same time, the FDA ought to consider requiring AI platforms to label explicitly the extent to which AI-generated responses have been reviewed by health professionals and to flag potentially harmful advice. Only by enforcing stronger regulations, demanding transparency and insisting on ethical responsibility can we ensure that AI supports healing and well-being instead of undermining it.

Erin J. Choi is a sophomore at Brown University studying international and public affairs on a pre-law track.