Four Questions to Ask Before Using AI-Generated Content in Health Care

March 13, 2023

In-house marketers and agencies alike grapple with whether and how AI should become just another part of the marketing toolkit. What are the benefits? What are the pitfalls?

By Rachael Sauceman

Rachael Sauceman is director of strategy for Full Media.

You get into work, grab your coffee, boot up your computer, and look at your to-do list for the day. The one and only thing you’ve left for yourself on the to-do list is the thing you’ve been putting off for days, weeks even: write that blog on the dangers of high blood pressure.

Now instead, imagine that your process looks like this:

You open a browser and go to your favorite AI platform, like the buzzy ChatGPT, which has recently become a media sensation, leading to a multibillion-dollar investment from Microsoft. You type in a series of questions for ChatGPT to answer:

  • What is high blood pressure?
  • How do I know if I have high blood pressure?
  • Why is high blood pressure dangerous?
  • How do I treat high blood pressure?

The AI writes back cogent answers that follow widely held understandings of high blood pressure, a common condition. In mere minutes, you have a 1,000-word blog. Maybe you even go the extra mile: You ask the AI to rewrite the content in layperson’s terms to bring the reading age down a little, making sure it’s fully accessible.

But can AI really replace the style, knowledge, and judgment of a human writer? Or understand the nuances of language? Does it have to be either/or, or is ChatGPT simply another technology that smart marketers can use to their advantage? What guardrails should we all put around AI to use it responsibly?

Below are four considerations for all health care content — AI-generated or not — that may help you set your guardrails.

