A thread by @ChrisLaubAI compiled the most shared Claude prompts in research communities. I tested them. These three are absolute gold.
1. The Contradictions Finder
"List all internal contradictions, unresolved tensions, or claims that don't fully follow from the evidence in this draft."
It catches the logical jumps your brain autocorrects but a reviewer won't.
2. The Assumption Stress Test
"List every assumption this argument relies on. Now tell me which ones are most fragile and why."
This one hurt. I realized my entire results section assumed my sample was representative. It wasn't.
3. The "What Would Break This?"
"Describe realistic failure modes for this approach. Not edge cases."
Great for forecasting sections.
Caveat: Don't paste proprietary data into public LLMs. Use institutional licenses or local models. Protect your IP.