
MotherDuck Dives: From Natural Language to Live Dashboards

9.3.2026 | 8 minutes reading time

Dives are interactive visualizations created through natural language, built directly on top of data in MotherDuck. Users describe what they want to see, and an AI agent generates a persistent, interactive component that lives in their workspace alongside SQL notebooks. MotherDuck positions Dives as a "bridge between one-off questions and always-up-to-date dashboards". Instead of building a full dashboard or writing visualization code, users ask a question and save the answer as a Dive that stays current with the underlying data.

What Sets Dives Apart

Traditional dashboards require clicking through UIs or writing visualization code. Dives replace that with plain English. They query live data on every load, eliminating manual refreshes and stale snapshots. Integrated directly into the MotherDuck workspace, they sit alongside SQL notebooks and offer instant exploration. Users can filter, drill down, and interact without waiting for queries to re-run. Unlike one-off AI-generated charts, Dives are persistent, shareable, and interactive rather than static images. This is also why MotherDuck calls the product Dives rather than simply Dashboards: Dives can go beyond dashboards, for example by modifying data or interacting with external services.

How Dives Work

Getting started requires a MotherDuck account with at least one database and an AI client connected to the MotherDuck MCP Server. Dives are available on Business plans at no additional charge. Under the hood, Dives leverage MotherDuck's multi-tenancy architecture to serve sub-second queries. Every user works with dedicated compute, so there is no slowdown when multiple users explore data concurrently. Dives are React components rendered within the MotherDuck UI, but they cannot currently be embedded elsewhere: the header and permission configuration needed to run a Dive in an iframe is not yet in place. Creating a Dive is essentially an SQL function call. These calls are executed server-side on MotherDuck and are therefore not available on local-only DuckDB connections. This also means users do not have to rely on an AI agent: Dives can be created, read, updated, and deleted manually, directly from SQL. A data pipeline could, for example, call a Dive function as its final step to automatically create or update a dashboard. The main way to create Dives today is, of course, still through AI agents, which is why this article focuses on that use case.

Collaboration and Governance

When a Dive is saved, the AI agent checks whether the databases it queries are shared with the organization. If not, it suggests sharing them so the team can view the Dive. Users can also explicitly ask the agent to share. This creates org-scoped shares for any private databases referenced in the Dive's queries and updates the Dive to use the shared references.

An important caveat: Dives currently do not support role-level security, and this is not on the near-term roadmap. If a user has access to a database, they can see its entire contents. MotherDuck's governance model is simplified compared to enterprise solutions, a consequence of being built on DuckDB. MotherDuck customers building customer-facing applications typically provision an individual database per user, leveraging MotherDuck's multi-tenancy to achieve data isolation.

Workflow

The key is to ask for a "Dive" explicitly. This signals the agent to persist the visualization in MotherDuck rather than producing a one-off chart. A prompt like "Create a Dive showing monthly revenue trends for the last 12 months" is all it takes; the agent handles SQL, chart configuration, styling, and saving.

Dives appear in two places within the MotherDuck UI: recent Dives show up in the left sidebar below notebooks, and a complete list is available under Settings → Dives. Once created, users can refine a Dive through follow-up prompts. Each update modifies the Dive in place, keeping it current.

Users should be specific about the visualization they want, including details about chart type, time ranges, and groupings. Providing table and column names also helps: the less guessing the AI has to do, the better the output. Finally, it pays to iterate incrementally: start with a basic visualization and layer on complexity through follow-up prompts.

Hands-On: Weather and Cycling in Munich

In our three-part Talk To Your Data series, we explored MotherDuck's MCP capabilities with Munich weather and cycling sensor data (2015–2025). For this test, we used the same dataset to see what Dives would produce. To keep the context manageable, we scoped our prompts to the year 2025.

Approach 1: The One-Shot Prompt

We started bold, deliberately ignoring the best practices above. With Claude Code connected to MotherDuck via MCP, we went straight for a single comprehensive prompt: "Create a Dive that visualizes the relationships between daily bike traffic and weather conditions throughout the year 2025 and identify tipping points with the mcp_playground data."

In contrast to the "normal" Claude, Claude Code creates a local preview before uploading the Dive. Only after we explicitly asked it to save did the Dive appear in MotherDuck. Additionally, having to jump between the terminal and the browser feels rather unintuitive, especially since the MotherDuck workspace already supports both SQL and click-based interaction. We are not alone in this observation. In the official MotherDuck Slack channel, MotherDuck's Garrett O'Brien noted: "nothing official to share yet but I have heard three separate requests for this in the past ~hour, so it is top of mind." MotherDuck's rationale for the current approach is that it wants to deliver data to users in their existing workspace (ChatGPT, Claude, etc.) rather than redirecting them to a separate UI.

After roughly two minutes – most of which was spent on the local preview – the Dive appeared in our workspace. The first visualization is a scatter plot of daily temperature vs. total bike rides, split by dry (blue) and rain/snow (red) days. Rainy days consistently show lower ridership at the same temperature, with detailed tooltips on every point.

[Image: temp_vs_rides.png]

The tipping points tab contains a bar chart grouping days into 2°C temperature bands. The steepest acceleration in ridership occurs in the 0–15°C range, where traffic roughly doubles. A reference line marks the single largest jump.

[Image: tipping_points.png]

The last graphic is a monthly overview, a dual-axis line chart showing average daily rides and average temperature tracking closely through the year.

[Image: monthly_overview.png]

Approach 2: Iterative Refinement

For the second run, we took the opposite approach and started by asking for a "first general Dive about the data" for 2025:

[Image: general_dive.png]

We then asked Claude to focus on weather conditions and bicycle traffic. The refined Dive already showed a more nuanced picture: the scatter plot now distinguished between dry, rain, sleet, and snow, compared to the coarser dry vs. rain/snow categorization from Approach 1.

[Image: general_to_weather.png]

Notably, the overall distributions in both scatter plots are identical. The iterative approach did not change the quantity of data but improved its granularity. In our earlier Talk To Your Data series, we observed a similar effect when comparing commented vs. uncommented databases. Here, the same commented database was used in both runs. The difference was that the agent built richer context through its own preceding exploration. This also produced two additional charts: average rides by temperature band and average rides by precipitation level.

Finally, we gave Claude a prompt similar to the one-shot version from Approach 1. The MotherDuck workspace updates automatically when the agent uploads new code.

[Image: general_to_final.png]

The final iteration is markedly more refined than the one-shot result. The agent provides tipping point analyses across temperature, sunshine, and precipitation. The temperature tipping point remains the 18°C mark with identical rider counts, confirming consistency across both approaches.

Pushing Further

With a solid dashboard in place, we explored additional capabilities. We added interactive filters and controls that users can adjust to their needs. The agent also attempted to include a Daily Data Table, which initially threw a Runtime Error. After pointing this out, the agent fixed the issue and delivered a working table showing daily rides, average temperature, sun hours, rain, and wind speed.

We then asked how weather factors combine – for instance, whether sunshine matters more when it is warm – by requesting a multi-factor model showing bike traffic across weather combinations to identify optimal cycling conditions:

[Image: sweet_spots.png]

And for fun, we asked the agent to restyle the original Dive in a Windows 95 aesthetic:

[Image: win95.png]

Conclusion

Dives represent a meaningful step forward in making data visualization accessible. By connecting natural language prompts to persistent, live-updating dashboards, they remove much of the friction traditionally associated with data exploration: no SQL expertise or visualization frameworks required.

Our testing revealed one central takeaway: iteration produces better results than one-shot prompts. Both approaches identified the same underlying patterns (the 18°C tipping point, the rain-ridership correlation), but the iterative path yielded more nuanced categorizations and additional analyses. The agent builds better context when it explores the data incrementally. Dives work best as a conversation, not a command.

The bonus experiments demonstrated genuine flexibility: interactive filters, multi-factor models, error recovery through conversation, and even cosmetic restyling all worked through natural language. Dives are not static reports but interactive applications that evolve with user needs.

Two friction points remain. First, the workflow currently requires toggling between a terminal/AI client and the browser, which feels at odds with MotherDuck's otherwise integrated workspace. Second, the absence of role-level security means Dives are not suitable for every enterprise use case. Teams with strict data access requirements need to architect around this limitation using per-user databases.

For teams on MotherDuck Business plans, the value proposition is clear: start with simple questions, iterate through conversation, and arrive at sophisticated, always-current visualizations that live alongside your data. The fact that Dive creation is an SQL function call opens an especially compelling possibility: data pipelines that automatically generate dashboards from their outputs, blurring the line between data processing and data presentation.
