Submissions | VizChitra 2026
Compiling Business Intent into Visual Data Systems
Varadharajan
Head of Engineering • bighammer.ai
Description
For decades, data visualization has relied on a fragile upstream process: business intent is translated into pipelines, metrics are engineered manually, and dashboards are built on top of transformations that may drift, break, or silently encode assumptions.
This talk introduces the concept of an AI-Native Data Compiler — an architectural layer that compiles business intent directly into validated, testable, and reproducible data pipelines that power visualization systems.
Rather than focusing on “AI that generates SQL,” this talk reframes the problem: visualization quality depends on how faithfully business meaning is translated into executable data structures. Today, that translation is manual, iterative, and error-prone.
Drawing from my experience building AI-driven data platforms and schema-driven data exploration systems, I will explore:
- Why current chat-to-SQL and copilot approaches are insufficient for reliable visual systems
- How intent modeling differs from code generation
- The structured reasoning stages of an AI-native compiler:
  - Intent modeling
  - Semantic schema mapping
  - Transformation synthesis
  - Validation and edge-case generation
  - Deterministic pipeline emission
- How compiled transformations can produce visualization-ready, self-describing data contracts
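To make the stages concrete, here is a minimal Python sketch of how such a compiler might be structured. Every name here (the dataclasses, the toy semantic layer, the generated SQL shape) is an illustrative assumption, not the architecture presented in the talk:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the five compiler stages as plain functions over
# small dataclasses. All names and structures are illustrative only.

@dataclass
class Intent:                      # Stage 1: intent modeling
    metric: str                    # e.g. "monthly_revenue"
    grain: str                     # e.g. "month"

@dataclass
class MappedIntent:                # Stage 2: semantic schema mapping
    intent: Intent
    table: str
    column: str

@dataclass
class Transformation:              # Stage 3: transformation synthesis
    sql: str

@dataclass
class CompiledPipeline:            # Stage 5: deterministic pipeline emission
    sql: str
    checks: list = field(default_factory=list)

# Toy semantic layer: maps a business metric to a physical table/column.
SCHEMA = {"monthly_revenue": ("orders", "amount")}

def model_intent(request: dict) -> Intent:
    return Intent(metric=request["metric"], grain=request["grain"])

def map_schema(intent: Intent) -> MappedIntent:
    table, column = SCHEMA[intent.metric]
    return MappedIntent(intent, table, column)

def synthesize(mapped: MappedIntent) -> Transformation:
    sql = (f"SELECT date_trunc('{mapped.intent.grain}', created_at) AS period, "
           f"SUM({mapped.column}) AS {mapped.intent.metric} "
           f"FROM {mapped.table} GROUP BY 1")
    return Transformation(sql=sql)

def validate(t: Transformation) -> list:
    # Stage 4: emit edge-case checks alongside the query itself.
    return ["period IS NOT NULL", "row_count > 0"]

def emit(t: Transformation, checks: list) -> CompiledPipeline:
    return CompiledPipeline(sql=t.sql, checks=checks)

def compile_intent(request: dict) -> CompiledPipeline:
    mapped = map_schema(model_intent(request))
    t = synthesize(mapped)
    return emit(t, validate(t))

pipeline = compile_intent({"metric": "monthly_revenue", "grain": "month"})
```

The point of the sketch is the shape, not the code: each stage consumes a typed artifact and produces the next, so the final pipeline carries its own validation checks rather than relying on free-form generated SQL.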
A key idea in this talk is that visualization systems should not depend on fragile, undocumented pipelines. Instead, they should consume compiled, validated transformation artifacts that embed rounding rules, null strategies, aggregation logic, and reproducibility constraints.
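As one possible illustration of such a contract, the sketch below embeds rounding rules, a null strategy, aggregation logic, and a reproducibility pin in a single self-describing object. The field names and policies are assumptions for demonstration, not a fixed specification:

```python
from dataclasses import dataclass
import json

# Illustrative data contract: the visualization layer consumes this object,
# not an undocumented pipeline. All fields here are hypothetical examples.

@dataclass
class DataContract:
    metric: str
    aggregation: str       # e.g. "sum"
    null_strategy: str     # e.g. "drop" or "zero"
    rounding: int          # decimal places applied after aggregation
    source_hash: str       # pins the upstream transformation for reproducibility

    def apply(self, values):
        """Apply the contract's null strategy, aggregation, and rounding."""
        if self.null_strategy == "drop":
            values = [v for v in values if v is not None]
        elif self.null_strategy == "zero":
            values = [0 if v is None else v for v in values]
        total = sum(values) if self.aggregation == "sum" else None
        return round(total, self.rounding)

    def describe(self) -> str:
        """Self-description: the contract serializes its own semantics."""
        return json.dumps(self.__dict__, sort_keys=True)

contract = DataContract("monthly_revenue", "sum", "drop", 2, "abc123")
result = contract.apply([10.0, None, 4.5])  # nulls dropped, summed, rounded
```

Because the contract serializes its own semantics, two consumers rendering the same metric cannot silently disagree on how nulls or rounding were handled.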
The structure of the talk:
- The hidden fragility behind modern dashboards
- Why visualization accuracy is a compilation problem
- The AI-Native Data Compiler architecture
- Schema-driven data exploration and visualization generation
- Implications for the future of autonomous visual analytics
This topic matters to me because I have spent years building data infrastructure where visualization correctness depends on subtle transformation logic. Repeatedly, I’ve observed that visualization errors are rarely visual — they originate upstream in translation failures.
For the data visualization community, this proposal connects directly to ongoing conversations around trust, reproducibility, explainability, and human-AI collaboration in visual systems.
The intended audience includes visualization practitioners, data engineers, analytics leads, and researchers exploring AI-assisted visual analytics.
Key takeaways:
- Visualization systems are only as reliable as their transformation semantics
- AI must move from assistive code generation to structured compilation
- Schema-aware reasoning enables reproducible visual data contracts
- The future of data visualization is tightly coupled with AI-native data infrastructure