Automate case study creation with AI, Part 2
Let's continue our journey into AI automation with Make.com: repurpose an article into social posts.
Hello, thank you for reading The Product Courier.
We’re now 2,550 in the community, thanks. 🙏🏻
In this 45th edition, I want to share with you:
🗞️ The FOMO on LLMs:
New models out with Claude 3.7, Grok 3, ChatGPT 4.5…and that’s just this week…
🧰 Part 2 on “how to create case studies on autopilot with AI + automation”
Building AI automation made simple.
If you're new here, welcome!
I’m Toni, I help B2B companies build products people want to buy with AI-powered GTM and product tactics.
With Lucas, we share actionable AI tips to help you deliver better work and accelerate your career.
Happy reading!
News of the week. 📰
The FOMO on LLMs 📌
“Claude 3.7 is a game changer.”
“Grok 3 resets the AI race and beats OpenAI.”
“QwQ Max challenges OpenAI & Deepseek”
“ChatGPT 4.5 is more human-like than ever.”
These are quotes about new LLMs posted in the last…72 hours…
New AI models drop every week, particularly now, with every AI provider wanting to show their “reasoning” or “deep thinking” model.
This generates a huge FOMO (fear of missing out) among founders and their teams:
Should we switch? Maybe cancel our OpenAI team account?
Do we have to start over with a new model or reasoning method?
What will we do with our existing AI agents?
The pace is exhausting, and it can be overwhelming.
And we keep reading posts by “AI gurus” with clickbait titles like:
“I tried [New Model], so I canceled [ChatGPT].”
Just like you said you would 2 weeks ago about another model.
Why it matters
The pressure is real. No one wants to miss out on something better, faster, or cheaper. But constantly chasing the latest release creates more problems than it solves:
Technical debt from endless migrations.
Compliance risks from unvetted providers.
Wasted effort on models instead of real outcomes.
Instead of following a clear AI strategy, teams react to every new launch—hoping that the next big thing will magically fix their problems.
Spoiler: It won’t.
If you are not a scientist, a researcher, or a top developer, chances are you won’t even notice much of a difference in the results you get from the different new models.
What we can learn from this.
The real question isn’t which model to use—it’s why you need one in the first place.
No single model is a perfect fit for every use case.
The key is making AI decisions based on your actual needs, not hype.
Instead of chasing every new model, focus on fundamentals:
Prioritize your use case.
Are you building a chatbot?
Fine-tuning for a niche industry?
Automating workflows?
Choose models that serve your specific goals.
Diversify providers.
Don’t rely on a single model.
Weigh OpenAI, Anthropic, Mistral, or open-source options based on cost, latency, and reliability.
Think about compliance early.
Some models store query data, others don’t.
Know your legal and security requirements before committing.
Optimize for business impact, not benchmarks.
The best model isn’t the one with the highest leaderboard score.
It’s the one that delivers value.
For example:
I like using Claude for coding and feature artifacts, but I still prefer ChatGPT for content creation because I’ve leveraged its memory and Projects with tailored prompts that work for me.
I like using Perplexity for day-to-day search, but I prefer Grok’s DeepSearch for detailed insights.
It is good to stay updated on the latest releases and advancements.
And test and iterate on new models.
AI is a tool, not a goal. Skip the hype. Focus on what matters.
Let AI work for you, not the other way around.
Want to master AI but aren’t sure about the return on investment?
Check out our free AI ROI Calculator:
Run a personalized estimate of the time your team can save using AI.
Get access to tailored AI training sessions to learn to work smarter. ✨
Your weekly tutorial. 🧰
Automate case study creation with AI Part 2 🤖
“Automation applied to an efficient operation will magnify the efficiency. […]
Automation applied to an inefficient operation will magnify the inefficiency.”
Bill Gates
✅ AI tools save time, but manual tasks slow you down.
✅ Copy-pasting between tools and AI prompts ends up taking lots of time.
✅ We have seen how to use AI+automation to streamline content creation.
✅ We saw how to create an article; now we’ll see how to repurpose the content.
Previously on The Product Courier show…
Let’s continue our journey into the Agentic world.
We started with Zapier AI Automation.
Last week, I introduced AI automation with Make.com to create case studies for marketing.
Customer Support, Product, or Sales tags an interesting customer call where the customer shares their challenges and relevant information about your solution.
You add the transcript in Notion.
The call transcript is read and turned into a case study article with the challenges and solutions.
The article is saved in Notion automatically.
Post drafts are generated for LinkedIn, X...
You receive a Slack notification to review the content.
We saw how to create the article in Notion automatically.
Today, we’ll continue the workflow to send Slack notifications and generate social posts.
This is what the output looks like:
Let’s cut to the chase. Take a breath, and let’s proceed step by step.
Module 7 - Router
We continue where we left off after Module 6 (Notion update), with our case study article saved in a Notion database.
We want to use the case study article we created and repurpose it for LinkedIn and Twitter/X. We’ll notify our team on Slack with a summary of the article.
The challenge: if we use ChatGPT with a single prompt for all of these tasks, the prompt becomes very long, with several different jobs to perform.
And chances are, the AI will make mistakes and/or return incomplete outputs.
To avoid that, we’ll use one ChatGPT prompt per task (one for the LinkedIn post, one for the tweet, and one for the Slack summary).
We’ll use a “router” module in Make. This module distributes the output from Notion (our article) to three different paths.
Click the “+” button next to the last Notion module.
Search for “router” and add it.
You can then click “+” and connect many other modules to it.
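If it helps to picture the router in code terms, here is a minimal sketch of the same fan-out idea: one article, three separate focused prompts. This is purely illustrative, not Make.com internals; the function name, variable names, and prompt wording are placeholders, and the actual AI calls come in the next modules.

```python
# Illustrative sketch only, not part of the Make.com setup.
# It mirrors what the router does: the same article goes down three paths,
# each with its own short, focused prompt instead of one long multi-task prompt.
# All names and prompt wording below are placeholders.

TASK_PROMPTS = {
    "linkedin_post": "Repurpose this case study into a LinkedIn post.",
    "tweet": "Repurpose this case study into a single tweet.",
    "slack_summary": "Summarize this case study for a Slack notification.",
}


def fan_out(article_text: str) -> dict[str, str]:
    """Build one focused prompt per task, the way each router path will."""
    return {
        task: f"{instruction}\n\n{article_text}"
        for task, instruction in TASK_PROMPTS.items()
    }


if __name__ == "__main__":
    prompts = fan_out("Example case study article text.")
    for task, prompt in prompts.items():
        print(f"--- {task} ---\n{prompt}\n")
```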
Module 8a - ChatGPT for LinkedIn
As we did to turn the call transcript into a case study, we’ll ask ChatGPT to repurpose the article we crafted into a LinkedIn post and a Tweet.
We want to keep the essence of the article and:
Add a catchy hook,
Use a proven LinkedIn framework like PAS (Problem-Agitate-Solution),
Share the key elements of the case study,
End with an actionable CTA (see the quick sketch below).
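Here is a minimal sketch of what that call looks like in code, in case you want to prototype the prompt before wiring it into Make.com. It assumes the OpenAI Python SDK with an OPENAI_API_KEY set; the model name, prompt wording, and the article_text variable are placeholders to adapt, not the exact Make.com configuration.

```python
# Minimal prototype sketch, not the exact Make.com module configuration.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
# Model name and prompt wording are placeholders to adapt to your own style.
from openai import OpenAI

client = OpenAI()

article_text = "Your case study article pulled from Notion goes here."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model
    messages=[
        {
            "role": "system",
            "content": (
                "Act as a seasoned LinkedIn copywriter with 10+ years of "
                "experience crafting high-performing, emotionally engaging "
                "posts for a professional B2B audience."
            ),
        },
        {
            "role": "user",
            "content": (
                "Repurpose the case study below into a LinkedIn post. "
                "Open with a catchy hook, follow the PAS "
                "(Problem-Agitate-Solution) framework, keep the key elements "
                "of the case study, and end with an actionable CTA.\n\n"
                + article_text
            ),
        },
    ],
)

print(response.choices[0].message.content)
```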
How to set up the module:
Add a ChatGPT “Create completion” module to the router.
Fill in the System prompt:
Act as a seasoned LinkedIn copywriter with 10+ years of experience crafting high-performing, emotionally engaging posts for [INDUSTRY].
You are an expert in storytelling, persuasive writing frameworks, and content marketing tailored to a professional B2B audience.
Fill in the User prompt:
Adapt this prompt to your style and copywriting guidelines: