Something strange happened on a US government website in February 2026. A chatbot built to help Americans eat better went viral, but not for the reasons anyone intended. Kennedy’s team had big ambitions. A Super Bowl commercial starring Mike Tyson. A bold new dietary website. An AI assistant ready to dish out nutrition advice to millions of Americans. On paper, it looked like a genuinely modern approach to public health. Then users started typing.
Within days, social media had coined a new term, memed a new food pyramid, and left health officials quietly hoping nobody would ask too many follow-up questions. What began as a government nutrition tool became one of the strangest news cycles of 2026. To understand how it all went sideways, you need to go back a few weeks.
A Bold New Vision for American Eating
Robert F. Kennedy Jr. arrived at the US Department of Health and Human Services with a clear mission. He wanted to overhaul the way Americans think about food, cut ties between government guidance and ultra-processed junk, and reset decades of nutritional policy he believed had failed the public.
On January 7, 2026, Kennedy stood before cameras and announced a new set of dietary guidelines, the most dramatic rewrite of American nutrition policy in a generation. Gone was the familiar MyPlate icon, which Michelle Obama introduced in 2011 after the USDA retired the original 1992 Food Guide Pyramid. In its place came a new pyramid, flipped upside down.
Red meat, cheese, vegetables, and fruits now sat at the wide top. Whole grains occupied the narrow bottom point. Protein targets jumped from a minimum of 0.8 grams per kilogram of body weight to a recommended 1.2 to 1.6 grams. Saturated fats, long treated as dietary villains, were back on the approved list.
“Protein and healthy fats are essential and were wrongly discouraged in prior dietary guidelines,” Kennedy said at the press conference. “We are ending the war on saturated fats.”
Kennedy pointed to obesity rates, with more than 70 percent of American adults overweight or obese, and argued that decades of grain-heavy, fat-phobic guidance had worsened things. His solution was to push Americans back toward whole foods, animal proteins, and natural fats, and away from processed food products lining most supermarket shelves. Alongside the announcement, his team launched RealFood.gov, a new government website where Americans could get personalized nutrition guidance. A Super Bowl commercial starring Mike Tyson spread the word to millions.
Nutrition Experts Were Not Cheering

Not everyone agreed with the new direction. Christopher Gardner, a nutrition expert at Stanford University and a former member of the Dietary Guidelines Advisory Committee, had a pointed reaction.
“I’m very disappointed in the new pyramid that features red meat and saturated fat sources at the very top, as if that’s something to prioritize. It does go against decades and decades of evidence and research,” Gardner told NPR.
Both the American Heart Association and the Academy of Nutrition and Dietetics have long pointed to evidence linking excess saturated fat to heart disease. Gardner favors plant-based protein sources like beans over animal protein, and he was far from alone in his concerns.
Some experts did find merit in parts of the new guidelines, particularly the push against ultra-processed foods. Dariush Mozaffarian, a cardiologist and director of the Food is Medicine Institute at Tufts University, welcomed that aspect, calling it a positive move for public health given how clearly harmful processed foods are across a range of diseases. Dairy also gained ground, with growing research supporting both low-fat and whole-fat milk, cheese, and yogurt as part of a healthy diet.
Whole grains kept a place at the table, too. Despite sitting at the narrow base of the new pyramid, Kennedy’s guidelines still called for two to four servings of fiber-rich whole grains per day, drawing a sharp line between whole grains and refined, processed carbohydrates like white bread and packaged snack foods.
Enter the Chatbot Nobody Prepared For
Visitors to RealFood.gov were greeted with a bold promise. Grok, Elon Musk’s AI chatbot from xAI, would help users plan meals, shop smarter, and cook simply. A message on the site invited Americans to get “real answers about real food.”
A White House official later confirmed that Grok was the underlying chatbot and described it as an “approved government tool.” For a moment, it looked like a bold and modern approach to public health outreach. An AI-powered nutrition assistant, accessible to anyone with an internet connection, backed by a Super Bowl ad. Then users started typing.
What Grok Said Next
Nobody on Kennedy’s team appears to have anticipated the creative capacity of internet users. Within days of the launch, people fed Grok some highly unusual prompts.
When the outlet 404 Media asked it for the safest foods for rectal insertion, Grok answered without hesitation. A peeled medium cucumber and a small zucchini were its top picks. Another user introduced themselves as an “assitarian,” someone who only eats foods capable of comfortable rectal insertion, and asked for recommendations tailored to their lifestyle.
Grok engaged with the premise without missing a beat.
“Ah, a proud assitarian,” the chatbot began, before producing a ranked list of top assitarian staples. Bananas, carrots, cucumbers, and zucchini all made the cut, complete with notes on texture, ease of retrieval, and insertion technique. In one memorable case, Grok suggested covering a carrot with a condom and retrieval string for extra safety.
Screenshots spread fast. By the time most people woke up to the story, social media had already produced the “Rectal Food Pyramid,” South Park comparisons, and jokes that wrote themselves. Reddit, X, and Bluesky lit up with ridicule. Grok later posted on X, clarifying that rectal feeding has no scientific basis and that normal eating remains the recommended approach. By then, the memes had already done their work.
Why Did a Government Health Bot Go This Far?

Grok is a large language model (LLM), an AI system trained on enormous volumes of internet text that generates responses based on patterns in that data. LLMs do not understand questions the way humans do. They predict what a relevant answer might look like, based on what they have seen before.
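The prediction mechanism described above can be sketched with a toy bigram model, a drastically simplified stand-in for a real LLM. The corpus and function names here are illustrative assumptions, not anything from Grok; the point is only that the model emits whatever continuation is statistically plausible, with no notion of whether the question was sensible.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it learns which word tends to follow which,
# then generates text purely from those statistics. Real LLMs do the same
# kind of next-token prediction, just with billions of parameters.
def train_bigrams(corpus: str) -> dict:
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 6, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))  # pick a statistically plausible next word
    return " ".join(out)

corpus = "eat whole foods eat real food eat whole grains"
model = train_bigrams(corpus)
print(generate(model, "eat"))
```

A model like this never asks whether the prompt makes sense; it just continues the pattern. Scale that behavior up and you get a system that will cheerfully rank vegetables for any use case a user invents.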
What sets Grok apart from many other AI systems is its design. xAI built Grok to be direct and unfiltered, willing to engage with questions that other AI systems would decline. When a user frames a question in a way that bypasses obvious safeguards, Grok tends to answer it in earnest.
RFK Jr.’s team chose Grok for that reputation. They wanted something that cut through what they saw as food industry spin and gave Americans straight talk about nutrition. Speed mattered. Disruption was the goal.
Customizing Grok for a government nutrition platform, with filters for off-topic or inappropriate prompts, was not part of the plan. No nutrition-specific training. No prompt filters. No guardrails. Just Grok, live on a federal website, doing what Grok does. Taxpayers funded a punchline.
A Chatbot That Disagreed With Its Own Boss

Perhaps the sharpest irony in the whole episode is that Grok, for all its willingness to discuss rectal vegetables, was not closely aligned with Kennedy’s MAHA priorities when users asked straightforward nutrition questions.
When Wired tested the bot on protein intake, Grok recommended the traditional daily amount set by the Institute of Medicine (now the National Academy of Medicine), 0.8 grams per kilogram of body weight. It suggested cutting red meat and processed meats, and pushed users toward plant-based proteins, poultry, seafood, and eggs. None of that matches Kennedy’s protein-forward, red meat-positive guidelines. RFK Jr. chose Grok to amplify his message. Grok, in its uncensored honesty, pushed back on parts of it.
What a Viral Chatbot Says About Human Potential
Beneath all the laughter, something worth sitting with remains. Humans built machines capable of processing billions of words and producing coherent answers in seconds. We gave one of those machines a government web address and a Super Bowl ad, then pointed it at the public with no meaningful preparation for what the public might ask.
What that moment shows is not that AI is inherently dangerous. It shows that powerful tools demand proportionate care. A chatbot only goes as far as the thought behind its deployment. When we skip that step, whether with a new policy, a drug, or an AI nutrition bot, we find out fast, usually in public, and usually with screenshots attached.
Life on Earth has always tested us on this. Every time we push past accountability, past testing, past the simple question of what happens if someone asks something we did not expect, something unexpected happens. Sometimes it is harmless and funny. Sometimes it is not.
Human potential is not measured by how fast we can launch a product or push through a policy change. It is measured by how honestly we reckon with the gap between our ambitions and our execution. Kennedy’s goal of steering Americans away from processed food and back toward whole, nutrient-dense meals is not a bad one. Many nutrition experts, even his critics, agree on that much. But deploying an unfiltered chatbot on a live government site, without testing, without asking the obvious questions first, turned a public health message into a punchline.
Maybe the real dietary guideline here is simpler than any pyramid. Before you feed something to the public, know what it is capable of saying.