Research & Content Writing With Claude
Looking to augment your content creation and feeling stuck? Here's a step-by-step with Claude.
I partnered with the team at the AI-Powered Women Conference to create a primer course for attendees, so everyone arriving would share a baseline of AI knowledge, along with some differentiator tips and tricks. It was a great opportunity to see just how far we’ve come with AI augmenting content creation. I shelled out the $20 for a Claude Pro plan and went down the rabbit hole.
On the first pass, I tried things out to see whether it was worth investing the time. I uploaded some articles I’d selected as course material references, shared the module outline, and told Claude to analyze the content against the outline but not to start writing anything. With its analysis ready, I then asked Claude to find additional resources and explain why each was a good fit. I also gave examples of what I considered a “vetted source”.
I ended up adding 3 of the 8 articles it suggested to my reference materials. Considering the hours I would otherwise have spent searching different sources and skimming articles and research, this was a big win.
I had it analyze the original articles before giving it the search request because GPTs can get a little lost when handed multiple commands at once. If you give it steps, guiding it through a process, you’re more likely to get quality results. It also had the added benefit of surfacing themes and insights from the articles I’d selected that I could compare against later.
Step 1: References + topic outline + analyze
Step 2: Research request + build on step 1 + source examples
Step 3: Add specified articles to the original analysis
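If you’d rather script this than click through the chat window, the same staged approach translates pretty directly to the Anthropic Python SDK. To be clear, I did all of this in the Claude web app; the sketch below is just an illustration, and the model name, prompts, and placeholder variables are mine, not anything official. In my actual process there was also a manual vetting pass between steps 2 and 3.

```python
# A rough sketch of the same three-step flow, scripted against the Anthropic
# Python SDK instead of the Claude web app. Model name, prompts, and
# placeholder variables are illustrative.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

MODULE_OUTLINE = "..."   # your module outline
ARTICLES = "..."         # text of the reference articles you selected
SOURCE_EXAMPLES = "..."  # examples of what you consider a vetted source

# Each step is its own turn, so the model works through the process in stages
# instead of juggling every instruction at once.
steps = [
    f"Here is my module outline:\n{MODULE_OUTLINE}\n\n"
    f"And the reference articles I selected:\n{ARTICLES}\n\n"
    "Analyze the articles against the outline. Do not start writing anything yet.",

    "Based on that analysis, suggest additional resources that would fill gaps, "
    "and explain why each one is a good fit. Here are examples of what I "
    f"consider a vetted source:\n{SOURCE_EXAMPLES}",

    "Add the articles I've approved to the original analysis and summarize the "
    "combined themes and insights.",
]

messages = []
for step in steps:
    messages.append({"role": "user", "content": step})
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder: use whichever model you have access to
        max_tokens=2000,
        messages=messages,
    )
    reply = response.content[0].text
    messages.append({"role": "assistant", "content": reply})
    print(reply, "\n" + "-" * 40)
```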
Now for some slightly creepy fun. Dr. Felicia Newhouse is the founder of the conference, so I had Claude review her writing and speaking to capture her style for the course content. I gave it the context of the conference so it would have guidance on who she is (in case anyone else shares the name). It combed the web, found different references (which it listed), and then wrote up a comprehensive style guide for her writing, along with (unasked) notes on how that style could be applied to the module. Cool & creepy.
Summary of Claude’s observations:
I then reshared the outline (slight grounding to make the task at hand clear), included some content guidelines with a link to the conference, and prompted Claude to write a first draft.
Draft 1 was comprehensive, and it supplied a summary of what was included as well. Next came my favorite new prompting method: I asked it to critically review its own work.
It supplied a thorough review of what worked, what didn’t, and what was missing. My favorite part was that Claude graded its own work as part of the summary.
I asked it to revise based on the evaluation and recommendations it had provided (nothing in that batch seemed off topic). Once it was done with the new draft, I pushed it for another critical review of the content, this time really focusing on the value for the reader.
I really enjoyed this output because it evaluated its own work while simultaneously patting itself on the back for improving. It even informed me that only one more revision would be required. Once again, the robots are getting uppity.
I had it revise once again, and this time, to really drive home its point that no further reviews were needed, it evaluated the draft unprompted and supplied a grade.
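For the curious, here’s what that draft → critique → revise loop might look like scripted with the same SDK. Again, I did this by hand in the chat interface; the model, prompts, and number of passes below are placeholders, not my exact process.

```python
# A rough sketch of the draft -> critique -> revise loop via the Anthropic
# Python SDK. The model, prompts, and number of passes are illustrative.
import anthropic

client = anthropic.Anthropic()

def ask(messages, prompt, model="claude-sonnet-4-20250514"):
    """Send one user turn, keep the full history, and return the reply text."""
    messages.append({"role": "user", "content": prompt})
    response = client.messages.create(model=model, max_tokens=4000, messages=messages)
    reply = response.content[0].text
    messages.append({"role": "assistant", "content": reply})
    return reply

OUTLINE = "..."      # the module outline
STYLE_GUIDE = "..."  # the style guide Claude produced earlier
GUIDELINES = "..."   # content guidelines and conference context

history = []
draft = ask(history,
            f"Outline:\n{OUTLINE}\n\nStyle guide:\n{STYLE_GUIDE}\n\n"
            f"Content guidelines:\n{GUIDELINES}\n\n"
            "Write a first draft of the module.")

for _ in range(2):  # two critique/revise passes was roughly what I needed
    ask(history, "Critically review the draft: what works, what doesn't, and "
                 "what's missing? Focus on the value for the reader, and grade "
                 "your own work.")
    draft = ask(history, "Revise the draft based on your evaluation and "
                         "recommendations.")

print(draft)
```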
Overall, it was a solid start, and I felt confident enough to include this workflow in creating the content for all five modules. That said, this was a lighter ask than the actual content creation will be: I only included a few references and didn’t spend much time digging into the content because I knew it was a test.
More to come on the actual content creation experience and where Chad, I mean Claude, left some things to be desired.