Submissions | VizChitra 2026

Designing Human Control in AI Tools: A Three-Tier Framework

Kuldeep

Student, NID Bengaluru

Under Review · Talks · Visualizations and Tools

Description

As AI becomes embedded in dashboards, analysis tools, and design workflows, many interfaces collapse into a single pattern: chat. Designers are often told that if AI outputs are confusing or wrong, the solution is simply to “prompt better.” This framing hides a deeper issue: we are mixing together fundamentally different kinds of systems and asking them to behave the same way.

This talk introduces a simple three-tier framework to help designers reason about how AI should be used in data and visualization tools, based on the nature of the work being done rather than the capabilities being marketed.

The core distinction the talk makes is between deterministic systems and probabilistic systems.

Deterministic systems—such as spreadsheets, databases, and rule-based pipelines—produce the same output for the same input and are therefore trusted with execution and verification.

Probabilistic systems—such as generative AI—produce variable outputs by design. This variability is what makes them powerful for exploration, but also unreliable for truth and action.

The problem arises when probabilistic systems are presented through interfaces that make guessing and execution look the same. In data tools, this can lead to hallucinated insights, over-trust in AI-generated outputs, and dashboards that feel “uncanny” or unreliable.

To address this, I propose a three-tier framework:

  • Creator systems, where AI is used for exploration and ideation.
  • Guardian systems, where AI must be constrained to deterministic, verifiable sources.
  • Collaborator systems, where probabilistic intent is combined with deterministic execution through dynamic, generative interfaces.
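A minimal sketch of the Collaborator tier, under stated assumptions: `propose_query` is a hypothetical stand-in for a model call that maps free text to structured intent, and the names (`QueryIntent`, `ALLOWED_METRICS`, the toy table) are illustrative, not from any real tool. The probabilistic component only proposes; a deterministic layer validates against a verified schema and executes:

```python
from dataclasses import dataclass

# Guardian-style constraints: intents outside this schema are rejected.
ALLOWED_METRICS = {"revenue", "signups"}
ALLOWED_AGGS = {"sum", "mean"}

@dataclass
class QueryIntent:
    metric: str
    agg: str

def propose_query(user_request: str) -> QueryIntent:
    """Stand-in for a generative model turning free text into structured intent."""
    metric = "revenue" if "revenue" in user_request else "signups"
    return QueryIntent(metric=metric, agg="sum")

def execute(intent: QueryIntent, table: dict[str, list[float]]) -> float:
    """Deterministic execution: the number comes from the data, never the model."""
    if intent.metric not in ALLOWED_METRICS or intent.agg not in ALLOWED_AGGS:
        raise ValueError(f"Rejected unverified intent: {intent}")
    values = table[intent.metric]
    return sum(values) if intent.agg == "sum" else sum(values) / len(values)

table = {"revenue": [100.0, 250.0], "signups": [12.0, 30.0]}
intent = propose_query("total revenue this week")
print(execute(intent, table))
```

The design choice this illustrates: guessing and execution are kept visibly separate. The model's guess is an inspectable `QueryIntent` object, and only the deterministic `execute` step touches the data.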

Rather than treating these tiers as rigid categories, the framework presents them as normative design goals. Real-world tools are often hybrids, and the designer’s role is to decide which tier should dominate in a given context and how transitions between them are made visible to users.

The talk is grounded in examples from AI-augmented data tools and visualization workflows, showing why many current systems feel unreliable and how alternative interaction patterns can preserve human control.

The presentation is structured as:

  • A clear explanation of probabilistic vs. deterministic systems
  • Common failure modes in AI-powered data tools
  • The three-tier framework and its rationale
  • Practical implications for designers building or evaluating AI-enabled visualization tools

This topic matters to me as a designer because I see AI increasingly framed as an autonomous decision-maker, when in practice it requires careful interface design to remain useful and trustworthy. The framework offers designers a shared language to push back against vague “AI-first” decisions and to design tools that support exploration without sacrificing reliability. Key takeaways include a reusable mental model for AI in data tools, clearer criteria for when AI should generate versus verify, and a reframing of the designer’s role as an architect of human control rather than a prompt engineer.

Note: This talk is based on a working research paper, currently under review / pending acceptance at CHI SRC 2026.

