The Curator's Dilemma: When Making Is Easy, Choosing Is the Craft
Anand
LLM Psychologist•Straive
Description
My team creates 9 variations of a chart in 15 minutes - all defensible, none wrong. I spend my afternoon staring at them, trying to figure out which one is actually good.
After decades of building data viz in Excel, PowerPoint, and D3, I've found that the making part is easy, and fast. What's slow is the choosing.
Everyone has this problem. Vibe coding is now the default. The bottleneck has moved from "can I build this?" to "should I ship this, or that one, or the third version, or none of them?" From construction to curation.
And most of us haven't developed the muscles for that yet.
This Dialogue is about that shift: the new, harder skill of selecting, evaluating, and refining from abundance rather than building from scratch.
1. Our central question:
AI tools generate multiple "correct" visualizations in minutes. What does good judgment look like - and how do we develop it?
What are the criteria, instincts, and craft knowledge that help us choose well from a sea of plausible options?
Is "choosing" even the right way to think about it, or is the real skill knowing how to steer the generation in the first place?
2. Who we want in the room:
Anyone who picks from multiple versions of a chart. Especially if AI has entered your viz workflow, whether you welcomed it enthusiastically or reluctantly.
Data journalists, dashboard designers, analysts, design students, and independent practitioners - a mix of experience makes this richer: seniors bring taste they've built over years; juniors bring fresh eyes and the honesty of not yet having defaults.
3. The shape of the dialogue:
- The Exercise (10 mins): You'll see five visualizations (AI/human-made, unlabeled) of one dataset on a wall or screen. Rank them on effectiveness with colored dot stickers. We'll have an instant, visible map of the room's collective judgment before anyone speaks.
- The Discussion (15 mins): Groups of 4-5 compare rankings and articulate the why. We'll use prompt cards, e.g. "What did you notice first?" / "Which one would you trust in a news article?" / "Which one would you be proud to have made?" / "What's missing from the one you ranked lowest?" The goal isn't consensus - it's surfacing the criteria people use, often unconsciously.
- The Reveal (10 mins): I'll reveal which were AI-generated. Revisit rankings. Did knowing the origin change anything? Did any AI-generated chart rank high? Did any human-made chart rank low? This is where the conversation gets honest - about bias, what "craft" means, and whether origin even matters if the result is effective.
- The Rubric (10 mins): Each group contributes 2-3 criteria they used (or wish they'd used) for evaluation. I'll pin these on a shared wall and cluster them. The room works together to draft a rough "curator's checklist" - not a rigid scoring system, but questions we can ask when choosing among multiple visualization options.
4. Takeaways:
- A "curator's checklist" - 8-10 practitioner questions for choosing between visualizations (human/AI).
- An awareness of how we evaluate, what we value, and blindspots. And how others see this differently.
Materials Required
- Five printed (or projected) visualizations of the same dataset — a mix of AI-generated and human-made, prepared in advance by the facilitator
- Colored dot stickers (two colors — one for "most effective," one for "least effective") for the opening ranking exercise
- Prompt cards for small group discussions (printed, one set per group)
- A large wall or board with sticky notes and markers for the shared rubric-building exercise
- A one-page handout summarizing the collective rubric, to be compiled and shared with participants after the session
Room Setup
Visualizations displayed on a wall or large screen visible to all. Chairs arranged in clusters of 4–5 for small group discussion, with enough space for people to stand and move to the wall during the opening ranking and closing rubric exercises. A second wall or board space reserved for the rubric-building activity.