Marc Iskowitz interviews Dr. Brian Anderson, CEO/co-founder of Coalition for Health AI (CHAI), about the group’s progress on health-AI ethics rules, as well as a deal to launch its inaugural summit at HLTH later this year. 

Lecia Bushak discusses the “dark money” group using social media to lobby for passage of a bill limiting U.S. business with certain Chinese biotechs. 

Plus, Jack O’Brien takes us through Japanese competitive eater Takeru Kobayashi’s retirement due to health concerns in our Trends segment, along with Mike Tyson’s medical emergency and golfer Grayson Murray’s recent suicide.

Music by Sixième Son

Check us out at:

Follow us: 
YouTube: @MMM-online
TikTok: @MMMnews 
Instagram: @MMMnewsonline
Twitter/X: @MMMnews
LinkedIn: MM+M

To read more of the most timely, balanced and original reporting in medical marketing, subscribe here.

The 988 Suicide and Crisis Lifeline is a hotline for individuals in crisis or for those looking to help someone else. To speak with a trained listener, call 988. Visit for crisis chat services or for more information.

Note: The MM+M Podcast uses speech-recognition software to generate transcripts, which may contain errors. Please use the transcript as a tool but check the corresponding audio before quoting the podcast.

Hey, it’s Marc. Improving quality of care without compromising the safety of patients is the goal when introducing any innovation into healthcare, all the more so with artificial intelligence. Especially in this space, where the tools are evolving so rapidly, it’s not hard to envision how tech without guardrails and guidelines could lead to potential patient harm. Likewise, we need regulations that don’t stifle progress. Enter the Coalition for Health AI, or CHAI, established in 2022. CHAI’s QA labs are being set up to evaluate AI models for use in healthcare. That includes making sure any such innovations serve the whole population, not just the wealthy. We’ve seen examples of how AI can exacerbate existing class divides and racial biases. HCPs and marketers alike are right to be cautious about adopting under-regulated tools without an independent source of validation. CHAI is designed to give all stakeholders more confidence in this nascent technology. This week on the show, CHAI’s CEO and co-founder Dr. Brian Anderson explains how the group is striking a balance as it champions responsible use of AI in health and medical marketing, including his embrace of a novel approach to partnership. And Lecia’s here with a health policy update…

Hey Marc, today I’ll discuss the emergence of a new dark money group dubbed “Defend Our DNA” that has launched social media ads supporting the BIOSECURE Act.

And Jack, what’s trending on healthcare social media this week?

This week, we’re talking about Kobayashi’s diminished hunger, Mike Tyson’s ulcer flare-up and Grayson Murray’s death by suicide.

Dr. Anderson, how are you? And welcome to the MM+M Podcast.

Thanks for having me here, Marc. I’m doing quite well.

You know, I saw the piece in Nature Medicine that you wrote, a really nice sort of précis on the group and its mission, but it doesn’t really go into the origins of CHAI. So can you tell us how the group got started?

Yeah, so it really started in the pandemic. A number of private-sector organizations that were not traditional partners, inherently competitive groups, began coming together to do the work that needed to be done in the pandemic. And I think we really appreciated the kind of synergy and the kind of impact that a group of pharmaceutical companies, very competitive at one point, could have together, or a group of technology companies like Microsoft and Google coming together could have. And we asked a very basic question: is there something more that we can be doing beyond the pandemic that brings together these non-traditional partners to really have an impact in the health space? That was really the origin of CHAI. We asked ourselves, you know, in the AI space, do we have consensus around what trustworthy, responsible AI looks like at a technically specific level that would inform a software developer or MLOps manager? And the answer was no.

And, you know, health being the consequential space that it is, we thought that was a really important space for us to focus on. So that was how we got started.

Sure, it’s really a very consequential space indeed. How would you define responsible AI, and what’s the current state of affairs?

That’s a great question. So, you know, responsible AI sounds like a grandiose, big concept, but it really boils down to several pretty important principles. When you think about the development of really any tool, you want it developed in a way that is safe and effective, that doesn’t break, that doesn’t harm people.

And so these principles within responsible AI really get at that concept of how we build a tool that is safe.

That is trustworthy, that is dependable. And so, you know, there are concepts like transparency, right? How do you know where the tool was built and how it was built, like what materials went into it, if we’re talking about a physical object? In AI, concepts around transparency come down to disclosing the inputs that went into how the model was trained. Then concepts like fairness: we want our AI tools to perform well in ways that have no, or little, unjustified bias, ensuring that concept of fairness in how the model is trained, how it’s deployed and how it’s used.

Other concepts like reliability, right? You use a model or a tool once and it performs a certain way; you hope it’s going to perform the same way the second time. In AI that’s a big point. As an example, in generative AI with LLMs, you give an LLM one prompt and it answers one way; you give the LLM the same prompt and it may answer in a very different way. So how do we think about reliability in that sense? And then other concepts like robustness, or

its safety, its ability to ensure privacy and security. These are all concepts within responsible AI. So at a high level, it is ensuring that the AI aligns with all of those principles of responsible AI.
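The reliability point above, that the same prompt can yield different answers, comes down to how LLM output is sampled. This toy sketch uses a made-up three-token distribution, not any real model or API, to show why greedy decoding is repeatable while temperature sampling is not:

```python
import random

# Toy next-token distribution standing in for an LLM's output probabilities.
# (All values here are made up for illustration.)
TOKENS = ["yes", "no", "maybe"]
PROBS = [0.5, 0.3, 0.2]

def respond(prompt: str, temperature: float, rng: random.Random) -> str:
    """Return one 'answer' token for the prompt."""
    if temperature == 0:
        # Greedy decoding: always pick the highest-probability token.
        return TOKENS[PROBS.index(max(PROBS))]
    # Temperature scaling reshapes the distribution, then we sample from it.
    weights = [p ** (1.0 / temperature) for p in PROBS]
    return rng.choices(TOKENS, weights=weights, k=1)[0]

rng = random.Random(42)
prompt = "Should I talk to my doctor about this drug?"

# Same prompt, temperature 0: the answer never changes.
greedy = {respond(prompt, 0, rng) for _ in range(100)}
# Same prompt, temperature 1: repeated runs drift across several answers.
sampled = {respond(prompt, 1.0, rng) for _ in range(100)}

print(greedy)  # {'yes'}
print(len(sampled) > 1)
```

Real deployments tune this trade-off the same way: deterministic decoding is testable but rigid, while sampled output is fluent but, as noted above, hard to pin down for reliability evaluation.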

Sure, thanks for that robust definition. And you can’t have innovation without that, you know, coming together, without collaboration. Tell us about the news you reported last week about the strategic partnership to co-locate your first global summit and working group meeting at the HLTH conference later this year.

Yeah, so CHAI actually started three years ago as a coalition of the willing, with eight or 10 organizations coming out of this pandemic that I mentioned. Then it quickly grew: the U.S. government joined, right, ONC, FDA, HHS more broadly, started participating, and from there the number of organizations participating grew to approximately 2,500 unique organizations and thousands of individuals, very active in this space. And fundamentally, what CHAI is about is bringing the organizations and individuals in our health community together to come up with that common framework, technical in nature, that is what we would call a best-practice framework for generative AI or predictive AI, and how you measure a model’s alignment, or misalignment, with that best practice.

And to do that, in part we need to be able to come together physically to have these working groups. We can do it virtually over Zoom or Microsoft Teams, but one of the things we’re really excited about is bringing our group together to begin ensuring that we have a common definition, a common understanding, about what these mean, because that is fundamental if we want to take that definition, that framework, and actually begin implementing and adopting it. And so we’re really excited to partner with HLTH in the fall for the launch of our initial public draft version of the frameworks, which will be announced and shared publicly at HLTH in October with the broader community. So not just within the CHAI community: we’ll be developing these technical products over the summer in a series of five working groups that we launched last week at Stanford.

And then in the fall those technical products will be ready for public comment, and we couldn’t imagine a better stage to share that publicly than HLTH, where we’re going to have the vast majority of our community members already there.

And so the opportunity to come together seemed very natural, and having a stage and a platform to share that publicly with society is really what we’re hoping for. We don’t want to develop these technical frameworks in a vacuum; we want to share them widely, as transparently as possible, to get the kinds of important, necessary feedback that we need from as diverse a set of stakeholders as we can, right? It’s not just the tech vendors, it’s not just health systems; it’s medical device manufacturers, life science and pharmaceutical companies, patient community advocates, all being able to have eyes on these products we’re creating and offering meaningful feedback so we can improve them. And we’re excited to be doing that at HLTH.

Excellent, sounds like a great stage and venue indeed. And so you mentioned that first draft of the assurance framework. What’s the goal there again? You’re going to develop it over the summer, then it will be ready for public comment, and HLTH provides the venue to do that. What’s the plan after that? And talk about the scope of that framework: is it just covering payers and providers, or will it cover direct-to-consumer marketing, for instance?

A great question. So, you know, the focus of the work over the summer is really to lay the foundation for what responsible AI looks like at a technical level across a model’s life cycle, from its development to its deployment and on to its maintenance and monitoring.

And that fundamentally should, if we do it right, address a number of what I’ll call stakeholder-specific use cases: the ones you mentioned, payers, medical device manufacturers, a direct-to-consumer or direct-to-patient use case, certainly the health system clinical decision support use case. We want to be able to set the foundation for some of the very important, highly consequential uses of AI right now.

But that is just a foundation.

The next step, and I think one of the things we’re really excited about in CHAI, is supporting a network of quality assurance labs. You may ask: what is a quality assurance lab?

Back in 2021 the White House published a proposed AI Bill of Rights, and in that AI Bill of Rights it described the need for independent evaluation of models for their performance.

And this is not something unique; in a variety of different sectors we have these kinds of quality assurance labs. You may be familiar with Consumer Reports, or the Insurance Institute for Highway Safety and all the famous car crash tests with the dummies and the airbags. These are examples where you have an independent entity evaluating the manufacturer’s claims that its product is safe and effective. Or Underwriters Laboratories: probably all of us have lamps in our homes with a little sticker on the underside showing that the lamp was tested to ensure it wasn’t going to short-circuit and create a fire or hurt someone in our house. We want to support a network of quality assurance labs in CHAI.

For AI, to do that, though, we need to have a common understanding of what the definition of good looks like, what responsible AI looks like, and we also need a common set of metrics. How do you evaluate AI? These are very basic questions, and many listeners may be surprised that we don’t have common agreement on them. How do you measure bias in an LLM? We don’t know how to do that yet; we don’t have agreement on how to do that. With these working groups and the announcement in the fall, we hope to have an initial draft version of that. And so what comes next, Marc, is really to begin to support and launch this network of assurance labs, to enable them to say: okay, here’s how industry came together, here’s the definition of what responsible AI looks like at a technical level, and here are the metrics we can use as assurance labs to begin independently measuring a model’s performance. Does it align with this definition or not?

And then sharing that transparently with society. So we want to build a network of assurance labs that will help validate models, and the labs will then transparently share those results with society. We believe that’s one of the important steps we need to take to build more trust in how AI is deployed. Now, you mentioned the marketing space. It’s a great example, particularly in health. We’re all familiar with commercials on our TVs or ads on the internet that lay out a specific claim about a specific therapeutic, a specific drug or a lifestyle choice. And oftentimes, as an example, if you’re on a website in the marketing space for a healthcare company, an LLM may look to engage with you and help you navigate making a decision: should I talk to my doctor about taking that drug or not?

Or should I make particular lifestyle choices, or ask my doctor for a prescription for this drug? Those can be highly influential AI tools that, if done inappropriately or incorrectly, can lead an individual down the wrong decision tree, making the wrong or perhaps a more dangerous choice, or advocating for the individual to do something that would be unethical or potentially biased. And that is a real-life example: you can go to any number of websites and interact with an LLM that is doing that today.

We don’t currently have a framework for how to evaluate the ethics or the fairness of these kinds of LLMs. It is really important when we think about, as an example,

if I am creating a drug to help individuals with breast cancer, and the LLM tool was trained on highly educated, urban, Caucasian individuals, but wasn’t trained on rural Southeastern individuals, from rural Mississippi or Appalachia, who might be of African-American or Caucasian descent and may put different kinds of prompts into that LLM. Those individuals may get a completely different answer than the highly educated person; they may be told the drug is not for you, when in fact it is.

And so how do we think about ensuring that we have the right kind of ethical framework, the right way of measuring an AI model’s alignment or misalignment with the concept of responsible AI, even in the direct-to-consumer space, where you have marketers and well-intentioned life science companies doing wonderful, important and impactful work, but ensuring they’re doing it in a way that the AI is being built to serve all of us, and not just the highly educated urban individuals who are the common substrate upon which these models are trained.
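One concrete way to surface the kind of training-data bias described above is to compare a model’s accuracy subgroup by subgroup. This minimal sketch uses entirely fabricated predictions, labels and subgroup names purely to illustrate the check:

```python
from collections import defaultdict

# Toy fairness check: compare a model's accuracy across demographic subgroups.
# Every record here is fabricated purely for illustration.
records = [
    # (subgroup, model_prediction, true_label)
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 1, 1),
    ("rural", 0, 1), ("rural", 1, 1), ("rural", 0, 1), ("rural", 0, 0),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, pred, label in records:
    totals[group] += 1
    hits[group] += int(pred == label)

# Per-subgroup accuracy, plus the gap between best- and worst-served groups.
accuracy = {g: hits[g] / totals[g] for g in totals}
gap = max(accuracy.values()) - min(accuracy.values())
print(accuracy)  # {'urban': 1.0, 'rural': 0.5}
print(gap)       # 0.5
```

A large gap like this is exactly the “completely different answer” problem Dr. Anderson describes: aggregate accuracy can look fine while one subgroup is served far worse than another.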

Excellent. All right, so those are, as I see it, your twin goals: one is making sure we’re improving quality of care without compromising patient safety, and the other is making sure these tools serve the whole population, not just the wealthy, and don’t exacerbate existing class divides and racial biases. And you gave some great case studies of how that overlaps with the life sciences industry. So those are excellent.

I want to switch gears one more time and talk about partnerships. When we reported on you earlier this year in our game changer supplement, we talked about the power of external partnerships as a tool to drive innovation, kind of emerging as a central theme these days, and I think CHAI really exemplifies that. You’ve got backing from some really big groups, as you mentioned: Google, the Mayo Clinic, 2,500 in total. Talk about maybe some of the highlights there and, in your view, the importance of external partnerships

to what you’re doing. A great question. I mean, the magic of CHAI is not within the CHAI organization as a nonprofit itself; it is fundamentally because of the partners we bring together. When we launched CHAI, as I was saying, a lot of good work was being done, and is being done, in how we think about responsible AI, but it’s being done in silos at those organizations you mention, right, Google, Mayo, Microsoft, startups. Those 2,500 organizations obviously have an interest in this space because, for all intents and purposes, they’re doing something impactful in the health AI space. But what we aren’t doing yet, what we need to do, is to come together and share all these pieces of wisdom and experiential knowledge on how we do responsible AI. Some big organizations have whole teams in the responsible AI space; they’ve published publicly and likely have a lot of collateral internal to their organization that helped them, internally or externally, understand what it means to build whatever that product is in a responsible way.

We need to get to a place where we can come to a common framework about what that looks like collectively across organizations. And so from a strategic standpoint, CHAI is fundamentally a convener. We are bringing together these organizations; we want to partner with them to develop, in a pre-competitive space, this shared understanding. And so it is critical that we engage in partnership with the largest organizations out there, but also, importantly, the smallest organizations, right, the startups.

It’s oftentimes fascinating: in some of these working groups that we’ve launched, we have some of the very large organizations that have had, for a number of years, whole teams in the responsible AI space, and we’re learning a fantastic amount from them.

But that can’t be where we end, right? We have to include some of these startups. And so equally important is the strategic partnership with the investor community, the venture capital and private equity firms that fund and support whole portfolios of startups. What we don’t want is to create technical frameworks that are so onerous and hard to implement that any startup would throw up its hands and say, I give up, I can’t possibly do this, because I would need $10 million just to start a responsible AI team like that company over there with a market cap of $500 billion or something like that. And so strategic partnerships with the private-sector startup space are equally as important as those with the big tech space.

In addition, CHAI has announced several important additional strategic partnerships that go above and beyond. One of them, importantly, is with the National Health Council. I’m a physician, and I would be remiss if I didn’t remind everyone that we are in this space, certainly physicians are, because of our patients. We want to ensure our patients have the best, highest-quality care we can give them, and in the AI space that means making sure they are at the center of everything we’re doing in CHAI. And so the National Health Council brings to CHAI the kinds of diverse patient representatives that will be serving in our working groups, and every single one of them will have leadership roles; their voices will importantly be part of the frameworks being developed. And so we announced a strategic partnership with them.

We also announced a strategic partnership with HL7. HL7 has been in the technical-standards space for a long time, since the ’70s, and so we have a lot to learn from them. We don’t want to duplicate work they’ve done; we want to learn from them and accelerate the work our community wants to do in the AI space together. And so working alongside these organizations that are already doing good work is really important if we want to make sure we’re accelerating what we need to do, not duplicating it, and engaging the right stakeholders.

Sure, all very important to bring in all stakeholders and have a big tent indeed. I just want to get your quick comments and then we can wrap up, Dr. Anderson. Last week we saw some news from Epic, the big EHR provider: they released an API. Can you talk about that and give us your take on how it evokes the responsible AI theme?

Yeah, it’s a great question. So first, it was a fantastic announcement, and I’m very supportive of Epic, and frankly of any other EHR vendor or technology company out there that wants to support communities coming together and sharing software and open-source code to do local validation.

If we are to scale and grow and ensure that AI is developed and validated and monitored at a local level, we fundamentally have to do that with software. We cannot do that in a bespoke way across thousands of different organizations without a platform and a software-based approach. And so Epic beginning to support that kind of local validation, and building APIs to enable it, is a fantastic first step. But it is not sufficient, and it is not enough, for all of the use cases that are needed when we talk about validation. Validation fundamentally needs to involve local validation, meaning: I’m a health system, I’ve decided to deploy a model, and I want to ensure it’s being used and performing well prospectively moving forward, so I’m going to monitor it, I’m going to locally validate it.

There is a whole other set of use cases, though, where I am a health system and I don’t want to deploy that model yet; I haven’t made my procurement decision yet, and I don’t have the resources to validate it, so I need to rely on an external validator. It’s not the gold standard; the gold standard is always local validation. But that’s not how we scale. We don’t do clinical trials on every individual in the United States to determine if a drug is safe and effective; we do it on a representative subset. And so, similarly, in the external validation space, we need to be able to rely on these specific kinds of assurance labs that can do that external validation, that can create datasets representative of the health system that doesn’t have the resources to do local validation, to be able to tell them, or help them understand, whether that model would be safe and effective if deployed in their health system. So it’s a balance, right? We need the local validation that Epic is supporting with the open-source work and the APIs they announced last week, but we also need to support an effort that enables less-resourced health systems, or health systems that simply want to have the external validation data before they decide to deploy locally, to be able to have those results. And so it’s a nuanced space, with external validation and local validation; we need to create room for both and be able to support both. But I commend Epic for advancing the issue and moving us forward in the local validation space.

Great. And not all health systems or medical practices, of course, are on Epic, so perhaps they’re leading the way here, showing interoperability and allowing this open-source API to let external vendors build on top of their platform to give clinicians all sorts of different tools and the great things they’re used to. Yeah.

I mean, it’s the wonderful nature of the open-source movement, right? It’s going to evolve over time, it’s going to have contributions from a variety of different people, and we’ll see where it goes. I’m pretty excited about it.

Great. Thanks for your comment on that, and thanks for so beautifully articulating CHAI’s balance here: making sure AI flourishes in healthcare, that we don’t stifle it with regulation, but that we do it in a responsible way. Thank you so much, Dr. Anderson, for a great conversation. Hope we can do it again.

Thanks for having me here, Marc. Just call me back; I’m happy to come back.

Okay, it’s a deal. Health policy update with Lecia Bushak.

Just a few weeks after a House of Representatives committee approved legislation that would crack down on U.S. companies doing business with certain Chinese biotechs, it appears a new dark money group has emerged in support of the move.

Stat reported Monday that the organization, dubbed Defend Our DNA, has launched social media ads supporting the legislation, which is known as the BIOSECURE Act. A dark money group is an organization structured as a nonprofit that doesn’t disclose its donors. Defend Our DNA’s ads call out Chinese company BGI Group specifically and argue that it’s, quote, “helping the Chinese Communist Party harvest genetic data of millions of people around the world.” The social media ads appeared just a few weeks after the House Committee on Oversight and Accountability voted in favor of passing the BIOSECURE Act, which targets BGI Group, WuXi Biologics, WuXi AppTec, Complete Genomics and several other biotechs.

Its proponents argue the bill is a matter of national security, aiming to protect Americans’ health and genetic data from foreign adversaries. Republican Representative James Comer noted in a statement that the bill is, quote, “a necessary step towards protecting America’s sensitive healthcare data from the Chinese Communist Party before these companies become more embedded in the U.S. economy.” But WuXi, as well as the other companies targeted, argues the legislation is based on false allegations and that they do not pose a national security threat to the U.S.

The BIOSECURE Act must now pass the House and the Senate before being signed into law. I’m Lecia Bushak, senior reporter at MM+M.


And this is the part of the broadcast when we welcome Jack O’Brien to tell us what’s trending in healthcare this week. Hey Jack. Hey there, Marc. So all of our stories this week have some sort of tie-in with sports, and we’re going to start off with probably the most bizarre of all sports, which is competitive eating. I think our audience is plenty familiar with Takeru Kobayashi, who basically popularized Nathan’s Hot Dog Eating Contest earlier this century by demolishing records, and then the rivalry with Joey Chestnut, who, as my producer and I were talking off camera, is the one and only champion of hot dog eating contests. But Kobayashi was the one that put it on the map; I think if anyone knows Kobayashi’s story, they think of hot dog eating contests. Well, as part of a new Netflix documentary that came out just this week, Hack Your Health: The Secrets of Your Gut, German doctor Giulia Enders is working with her colleagues to understand and explain to the audience how gut health works and how it’s tied to broader physical health, and as part of that spoke with Kobayashi, who, again, you look back at him

eating 50 hot dogs in 12 minutes, eating hamburgers and pizzas and record amounts of ramen in short periods of time; you think of him when you think of competitive eating, and he’s giving it up as part of the documentary. He said that he no longer has that same desire for hunger or sense of taste, and it’s really kind of a tragic story: his body was broken, and he’s retiring from competitive eating due to these health issues. Lecia, I want to bring you in here, because we have these long conversations about the effects of GLP-1 drugs and this renewed conversation around eating, and eating to excess. He actually described himself as basically being raised Japanese but eating American, and that his body has kind of betrayed him in that way. I wanted to get your take on this odd development.

Yeah, you know, when I was looking up some of the health impacts of competitive eating, it’s interesting because there actually isn’t a lot of research on the topic, because it’s really difficult to get a randomized controlled trial in place where participants are told, you have to quadruple your stomach size by eating obscene amounts of food. So it’s really difficult to study the effects of it, and there’s not a ton of research. But the research that I did find online, published in journals, has shown that professional speed eaters have a higher chance of developing intractable nausea and vomiting, which makes it hard to keep food down, and a high risk of developing obesity and profound gastroparesis. So there are some health effects that have been documented in some studies, but really we don’t actually know what’s going on in the case of Kobayashi. But, you know, it’s

good on him for recognizing his body’s signals. As you said, he doesn’t really feel hungry anymore, there are some things that appear to be off with his gut health, and he’s going to be trying to heal, I guess, and giving up this sport to adjust his diet and his lifestyle. So good on him for doing that. But the gut health topic is something that is really interesting and needs a lot more research for us to better understand. I hope he heals; he said he wants to live a long and healthy life, so I hope he has that.

Absolutely. And Marc, I’m not going to put you on the spot and see how many hot dogs you could eat in 10 minutes; I’m sure it would be far fewer than the 60 that Kobayashi has done numerous times. But it does speak to a number of the different topics that Lecia’s covered before, whether it’s gut talk, which is obviously very popular on TikTok, or the whole idea of eating highly processed foods, which we continue to see more and more research saying diminishes people’s longevity, and also the conversation around the hungry voice that people have, which GLP-1 drugs are seeking to curb, basically, to diminish that amount of hunger. But he did that through years, decades, of competitive eating. Curious your thoughts on this whole situation.

Yeah, I couldn’t help, Jack, but think about the comparison to the GLP-1s, which mimic the gut hormone and make a person feel satiated; perhaps his gut hormones have been acclimated toward doing a similar thing because of his competitive eating. He doesn’t feel hungry, doesn’t feel full; I feel bad for him. And then, like you said, he said he’s Japanese but he’s been eating like an American. I also couldn’t help but think of the recent passing of Morgan Spurlock, yeah, four days ago, the documentary filmmaker who of course sprang to fame back in 2004 with his film Super Size Me, which was designed to illustrate the dangers of a fast food diet. He

ate only McDonald’s for a month, and he said he gained 25 pounds at the time, saw a spike in his cholesterol and lost his sex drive. Unfortunately, he just passed away of cancer; he was only 53, a year older than me, but it makes one appreciate their own mortality. He exposed the dangers of fast food at that time, put it on the map. He also did another documentary about the fast food chicken industry, Super Size Me 2: Holy Chicken!, and as the AP points out, after those films came out there was an explosion in restaurants stressing freshness, artisanal methods, farm-to-table goodness and ethically sourced ingredients. But nutritionally, not much has changed. So

Just kind of raises more awareness of the need and you know, the Netflix documentary as well to pay more attention to what we put inside our bodies and hope to KIRO gets you know is able to get his body back to into a normal Rhythm and we all enjoy, you know, being hungry and being full it’s one of the great pleasures of life. So all the best. Yeah, absolutely. We’ll be curious what that that second act looks like and I know that he’s trying to make a healthier version of a hot dog don’t really know how possible that is, but we’ll see how that all turns out for our Second Story this one we won’t talk about a ton but it’s it’s interesting just because of the names involved and the frequency with which this happens Mike Tyson’s recovering from a health scare that he had on a cross country flight last week where he became ill ultimately it was determined that he had an ulcer flare up. This comes weeks before he is slated to fight against social media and amateur boxer social media influencer and amateur boxer Jake Paul, which will be broadcast live on Netflix Tyson in a statement.

In a statement, Tyson's family said that he became nauseous and dizzy due to an ulcer flare-up 30 minutes before landing. He's appreciative of the medical staff that were there to help him, and now he's feeling great. Obviously, ulcer flare-ups are nothing particularly new; an estimated one in 10 people worldwide have peptic ulcers at some point in their lives. But I think it's important in the context of everything that's going on with Mike Tyson. He is a 57-year-old man fighting somebody who is about half his age, and whether this is just some sort of drama to drum up interest before the fight and get more people to tune in, it has raised the profile of ulcers and everything that goes along with them, which I think a lot of people overlook. It's not something that happens in everyday life, but it is more common than I think people would like to acknowledge.

It's a little frightening, I guess, that he has this upcoming fight scheduled. I was just reading a little bit about Jake Paul, and I guess he was sort of like a viral YouTuber, am I correct? And he just decided to start boxing.


But yeah, he's like in his 20s and Mike Tyson is nearing 60. You know, they've been kind of warring back and forth, saying they're both gonna bring their all to the fight, so it'll be interesting to see that play out. I hope that Mike Tyson's health issues, any potential ulcers, are cleared up before then. You know, it's always concerning to see someone who's that much older competing with a significantly younger opponent, but I wish him the best. It'll be fun to see what happens.

I'm really curious, Marc, how many pitches we're going to get in the coming weeks where it's like, oh, talk to our expert about ulcers, or here's this campaign that's coming out to raise awareness about ulcers. Because if you type ulcers into Google News right now, everything's just...

It's interesting. You know, we grew up with Mike being the most ruthless, vicious heavyweight champion boxer ever, and he was, you know, felled by a stomach ulcer. He's got to keep up this persona of being this ruthless guy with the fisticuffs, and yeah, he's shown his vulnerable side here, but we wish him the best. And hey, in his own inimitable way he's drawn more attention and more awareness to stomach health, so that's a good thing. Yeah, and similarly to the Kobayashi thing, Father Time is undefeated in that regard. So whether it's competitive eating or boxing or sitting on an airplane while an ulcer flares up, it's going to get you at some point. Not to end on a dour note, but this is the last episode that we're recording during Mental Health Awareness Month, and unfortunately, from the sports world, there was a very prominent death by suicide that happened over the weekend: golfer Grayson Murray, who was a two-time PGA Tour winner.

He died on Saturday, and his family confirmed that he had died by suicide following years of alcoholism and mental health issues. He had been diagnosed with social anxiety, and I just wanted to be able to talk about that on the show here, because we've obviously covered it throughout the month. And you know, it's not even just during May; we always get pitched a lot of different activations around mental health awareness. There's this conversation of, you know, if you see something, say something, being able to express your vulnerability, seek treatment, try and build up that mental resilience. And it was just striking to see so many people coming out and resonating with this story. He was only 30 years old, only a year older than I am, and to your point, Marc, nothing really puts your mortality into perspective like seeing somebody that's around your age dying like that. There were a lot of well wishes, both from the PGA Tour and members on there, but also from mental health advocates basically saying, you know, this is another instance of the mental health crisis, which I know you, Lecia, have covered extensively, that we're still facing in this country.

And people kind of have a lack of available options.

Yeah. No, I think there's been a lot of progress, especially since the pandemic, when it comes to mental health awareness and people being open about mental health, you know, anxiety and depression. But I think this is just an example of how there's still a long way to go. There are still people who are struggling who you would maybe have no idea about, because you'd take a look at, you know, Grayson Murray, and he was just posting on his Instagram; he'd been attending tournaments and winning and achieving things in his life. So it comes as an unexpected shock to many people. As you mentioned, Jack, he did struggle with alcoholism and had been open in the past about his depression struggles. I read one of his quotes somewhere where he basically said he constantly felt like a failure. That's a really stark difference from the reality of his life, where he was achieving a lot and was a talented athlete. And you know, it just brings even more awareness to this issue. It shows that really anyone can suffer from anxiety, depression, suicidal ideation, even, you know, talented and successful athletes. So it's a tough one, but I hope that it helps bring some more spotlight to the issue for others who might be struggling.

Yeah, it's such a cliche thing that comes out anytime a person of prominence dies by suicide. I remember when Anthony Bourdain and Kate Spade died within a week of each other back in 2018, and there was this whole conversation: oh, well, you're a popular celebrity, or you're a fashion designer, a successful golfer, a restaurateur. What issues could you have? There's this kind of conjecture of, oh, well, what could really be wrong? And you saw in what he said to reporters back in January, just a few months ago, that he felt like a failure, that he was drinking during tournament weeks and had all these issues. I know plenty of people, my wife included, who, if they were golfing like that, would think, oh, this is the height of whatever. But there's always something more that people deal with when it comes to social anxiety and things like that. And you know, even with him trying to get all that treatment, it was stuff that ultimately wasn't enough, and that's kind of the tragedy in all of this. Though it does raise further awareness that we need to do more on the mental health front.

Yeah, it reminds me a little bit of Matthew Perry's death, you know, after addiction for many years; his death is still kind of under investigation as well.

And here, you know, Grayson was only 30 years old, and he said he struggled with alcoholism despite his success. I remember interviewing Allison Schmitt, the Olympic swimmer, and she told me (this was reported in ESPN magazine long before I interviewed her) about a car ride back from a competitive meet where she considered driving off the road, just taking her own life. She would wake up in her bed and say, you know, the world wouldn't miss me if I was gone. She talked about the unique strain that Olympians are under, and I'm sure, as a golfer, there was a certain stress and strain there as well. Maybe he felt the pressure of always winning these tournaments, and maybe a little of, how long can I keep this up? And he had the alcoholism; it's not easy to maintain a habit like that and still perform at the top of your game, literally. So he must have been under tremendous strain.

So again, like you said, Jack, you never know what's bubbling, brewing beneath the surface, and it's good to check in on people in your relationships. Also stressing the need, when someone says, hey, how you doing, to really mean it and to really find out. We're talking about mental health and the brain-body connection, but also the importance of keeping up with friends, finding out how they're doing and feeling, and speaking up. It's okay to not be okay. And just one more thing to note before we wrap the show here: the Korn Ferry Tour, which is the level below the PGA Tour, as well as the PGA Tour itself, will have grief counselors on hand at events this upcoming weekend to help deal with any feelings and emotions people have associated with this. And you'll see in our transcript for this podcast, as we have for other podcasts where we've talked about the topic of suicide or suicidal ideation, that we include a link and phone number for the National Suicide Hotline as well.

That just comes with our own reporting as journalists and the ethics that go along with touching on this topic, and I appreciate our audience going through it with us. You know, it's not an easy subject to end the show with, but we do have an exciting episode next week. Kind of a hard pivot here: Marc, what's on the agenda for next week? Thanks for joining us on this week's episode of the MM+M Podcast. Be sure to listen to next week's episode, when we'll be joined by our very own Larry Dobrow to preview the 2024 MM+M Agency 100!