Welcome to our content laboratory.

Our recent work includes an experiment with rogue AI agents, an analysis of Reddit posts about AI in schools, research on 11,000+ AI papers, and even 100% human-written content.

We love putting theory into practice—and then writing about the results. These examples show how we're applying AI to build original content, from in-depth analyses to AI-powered tools.

01

We built a team of AI agents and then fired them

We experimented with autonomous AI agents to automate collaborative copywriting, assigning roles like editor and fact-checker. Our experiment revealed that while fact-checking agents effectively identified hallucinations, the agents often got stuck in endless revision loops, ignored instructions, or produced generic content. The conclusion: AI agents are like enthusiastic interns who sometimes produce brilliant work and sometimes go completely off-script, requiring constant human supervision.

Read the Article
02

We analyzed 415 Reddit posts about AI in schools

We examined online discussions about artificial intelligence in educational settings, analyzing emotionally significant keywords to gauge sentiment. Our analysis found that 76% of attitudes were negative or uncertain, with primary concerns around academic integrity, educator uncertainty about policies, and the risks of using AI to grade student work. The conclusion: schools are conducting largely unmonitored experiments without proven learning outcomes.
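The keyword-based sentiment tagging described above can be sketched roughly like this. Note that the keyword lists, labels, and tie-breaking logic here are invented for illustration; the actual study's lexicon and method are not shown on this page.

```python
# Toy keyword-based sentiment tagger (illustrative only; not the
# lexicon used in the actual Reddit analysis).
NEGATIVE = {"cheating", "plagiarism", "ban", "worried", "risk"}
UNCERTAIN = {"unsure", "policy", "confused", "maybe"}
POSITIVE = {"helpful", "excited", "love", "efficient"}

def tag_sentiment(post: str) -> str:
    """Label a post by which keyword set it overlaps most."""
    words = set(post.lower().split())
    scores = {
        "negative": len(words & NEGATIVE),
        "uncertain": len(words & UNCERTAIN),
        "positive": len(words & POSITIVE),
    }
    label, score = max(scores.items(), key=lambda kv: kv[1])
    return label if score > 0 else "neutral"

posts = [
    "My students are cheating with AI and I'm worried",
    "Our district has no policy and teachers are unsure",
    "AI tools are helpful for lesson planning",
]
counts = {}
for p in posts:
    label = tag_sentiment(p)
    counts[label] = counts.get(label, 0) + 1
```

A real analysis would use a richer lexicon (or a trained classifier) and handle negation, but the tally-and-compare shape is the same.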

Read the Analysis
03

We collected abstracts from 11,303 recent research papers on AI/ML

We built a prototype tool called "Future Scan" that uses semantic search and machine learning to identify trends in AI/ML research from arXiv. After indexing over 11,000 papers, we discovered that 30% focus on evaluation and benchmarking, with transformers, diffusion models, and efficiency/compression emerging as trending topics. The tool addresses the difficulty of manually finding relevant papers and spotting emerging patterns in the fast-moving field.
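As a rough illustration of searching a corpus of abstracts, here is a toy stand-in using cosine similarity over bag-of-words vectors. The Future Scan prototype itself uses semantic (embedding-based) search; the abstracts and titles below are invented.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts (a crude stand-in for embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented mini-corpus; a real index would hold 11,000+ abstracts.
abstracts = {
    "Paper A": "diffusion models for image generation and sampling efficiency",
    "Paper B": "benchmarking evaluation of large language models",
    "Paper C": "transformer compression and efficiency for edge devices",
}

def search(query: str, k: int = 2) -> list[str]:
    """Return the k abstracts most similar to the query."""
    q = vectorize(query)
    ranked = sorted(
        abstracts,
        key=lambda p: cosine(q, vectorize(abstracts[p])),
        reverse=True,
    )
    return ranked[:k]
```

Swapping the word-count vectors for learned embeddings is what makes the search "semantic": queries then match papers by meaning rather than exact vocabulary.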

Read the Analysis
04

AI-powered fact-checking tool

After working with AI-written drafts that were full of incorrect citations and unsubstantiated claims, we decided to fight fire with fire by building an AI-powered fact-checking tool. Writers upload an MS Word-formatted draft and get back a detailed report that includes the status of all embedded links, a list of uncertain and unsupported claims, and an overall credibility score.

Try the Tool
05

How we edit AI-generated copy and why it takes so long

Using AI to write copy shifts work from drafting to editing, but doesn't eliminate effort entirely. We developed the SHARP framework: Spot hallucinations, Hone voice, Add insights, Read aloud, and Perform final review. The article identifies common AI writing patterns like excessive em-dashes and vague phrases ("dive deep," "delve"), and offers a systematic approach to ensure AI-assisted content remains authentic and accurate.

View the Framework
Based on interviews with real people

100% human-written content

Prior to launching Good Content, founder Karen Spinner was Director of Content Services at The Argonaut, a full-service creative agency that closed its doors in April 2025. At The Argonaut, Karen personally created strategic, fully human-authored content for leading technology brands including Adobe, Confluent, HP Enterprise, and more.

Are you ready to experiment?

We're eager to collaborate on your next content challenge.

Get in Touch