You know those glossy brochures in a doctor’s waiting room?

You pick one up, read a page or two, and put it back. It looks nice and reads well, but none of it actually applies to your situation. You’re left thinking: “Okay, but what do I do with this?”

That’s what many AI answers feel like to me. One vague question in, one surface-level answer out. Polished. Reasonable. About as helpful as the brochure.

There’s a big difference between asking “What should I make for dinner?” and saying “I have chicken thighs, 30 minutes, and a kid who won’t eat anything green.”

Same tool. Different result. The difference is constraints.

Most of the prompt engineering I end up doing isn't about finding magic keywords. It's just clear thinking, written down.
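The dinner example can be written down as a tiny helper: take the vague task, attach the constraints, and hand the model one clear request. This is only a sketch of the idea; the function name and the template are mine, not any particular tool's API.

```python
def build_prompt(task, constraints):
    """Turn a vague task plus concrete constraints into one clear request."""
    lines = [task, "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# Instead of "What should I make for dinner?":
specific = build_prompt(
    "Suggest a dinner recipe.",
    [
        "I have chicken thighs",
        "30 minutes total, prep included",
        "no visible green vegetables (picky kid)",
    ],
)
print(specific)
```

The template itself is unremarkable, and that's the point: the value is in the constraint list, not the wrapper around it.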