You’ve trained a classification model. The metrics look good. But what does it actually do?

The best way to understand a model is to use it. Give it inputs. See what it predicts. Try different values. Build intuition for how it behaves.

Kanva’s new “Use Model” tab makes this immediate.

From metrics to understanding

Training metrics tell you aggregate performance. Accuracy, precision, recall, F1—these numbers summarize how the model did on test data.

But they don’t show you how the model thinks.

  • What happens when you change one input?
  • Which features matter most for this specific case?
  • How confident is the model in edge cases?
  • Does the behavior match your domain knowledge?

Interactive prediction answers these questions. You explore the model hands-on.

How it works

Open any trained classification project. The “Use Model” tab shows:

An input form. Every feature the model uses appears as a field. Text inputs, dropdowns for categorical values, number fields for numerics. The form matches your model’s schema exactly.

Real-time predictions. Fill in values and click predict. The result appears immediately—the predicted class and the probability score.

Detailed output. For each prediction, see the full probability distribution across classes. See which features pushed the prediction in each direction.

No files to upload. No code to write. Just type and predict.
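Under the hood, a single prediction boils down to mapping the form's values through the trained model and returning a class, a probability distribution, and per-feature contributions. Here is a minimal sketch of that flow, using an invented logistic model as a stand-in — the feature names, weights, and attribution method below are illustrative assumptions, not Kanva's actual implementation:

```python
import math

# Stand-in for a trained churn model: a logistic regression over four
# features. These weights are invented for illustration only.
WEIGHTS = {
    "account_age_months": -0.05,   # older accounts churn less
    "monthly_spend": -0.004,       # higher spend churns less
    "support_tickets_90d": 0.4,    # more tickets, more churn
    "contract_annual": -1.5,       # annual contracts churn less
}
BIAS = 0.5

def predict(features: dict) -> dict:
    """Return the predicted class, the full probability distribution,
    and a simple per-feature contribution (weight * value)."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    p_churn = 1 / (1 + math.exp(-score))
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return {
        "prediction": "churn" if p_churn >= 0.5 else "retain",
        "probabilities": {"churn": p_churn, "retain": 1 - p_churn},
        "contributions": contributions,
    }

result = predict({
    "account_age_months": 8,
    "monthly_spend": 150,
    "support_tickets_90d": 3,
    "contract_annual": 0,
})
print(result["prediction"], round(result["probabilities"]["churn"], 2))
```

The `contributions` dict is what powers "which features pushed the prediction in each direction": a positive value pushed toward churn, a negative value pushed away.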

What this looks like

Say you’ve built a customer churn model. The features include:

  • Account age (months)
  • Monthly spend ($)
  • Support tickets (last 90 days)
  • Contract type (monthly/annual)

You open the Use Model tab and enter a test case:

Feature            Value
Account age        8 months
Monthly spend      $150
Support tickets    3
Contract type      Monthly

Click predict. The model says: Likely to churn (78% probability).

Now you explore. What if they switch to annual? The probability drops to 45%. What if support tickets were zero? Down to 32%. What if they spent $300/month? 28%.
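This kind of exploration is a one-at-a-time sweep: change a single input, hold the rest fixed, and compare probabilities. A minimal sketch, again using an invented logistic stand-in (so the numbers here won't match the article's, which come from a real model):

```python
import math

def churn_probability(age, spend, tickets, annual):
    # Invented stand-in for the trained model; weights are illustrative only.
    score = 0.5 - 0.05 * age - 0.004 * spend + 0.4 * tickets - 1.5 * annual
    return 1 / (1 + math.exp(-score))

base = dict(age=8, spend=150, tickets=3, annual=0)
print("baseline:", round(churn_probability(**base), 2))

# What-if sweep: one change per scenario, everything else held at baseline.
for change in [{"annual": 1}, {"tickets": 0}, {"spend": 300}]:
    scenario = {**base, **change}
    print(change, "->", round(churn_probability(**scenario), 2))
```

Each scenario isolates one factor, which is exactly why the comparisons are easy to reason about: any change in the probability is attributable to the single input you moved.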

In minutes, you understand how your model weighs different factors. You can explain it to stakeholders. You can sanity-check it against your intuition.

Building trust through exploration

Interactive prediction serves several purposes:

Validation. Does the model behave sensibly? If a high-value, long-term customer shows high churn risk, something might be wrong.

Communication. Show stakeholders exactly what the model does. Let them try their own examples. Concrete demonstrations beat abstract metrics.

Edge cases. What happens with unusual inputs? Extreme values? Missing data? Test the boundaries.

Training. New team members can learn how the model works by experimenting with it.

The model becomes tangible. Not a black box that produces numbers, but a tool you can interact with.

Practical details

A few specifics on the Use Model tab:

  • The input form auto-generates from your model’s feature schema
  • Categorical features show dropdowns with valid values
  • Numeric features accept any value (the model handles ranges)
  • Predictions return both the class and probability scores
  • Feature contributions show what drove each prediction
  • Previous predictions are saved for comparison

This feature is available for classification projects now.

Try it

If you have a trained classification model:

  1. Open the project
  2. Click the “Use Model” tab
  3. Fill in some values
  4. See what the model predicts
  5. Change values and predict again

You’ll learn more about your model in five minutes of exploration than in an hour of staring at metrics.


Interactive predictions are available now for classification projects. Questions? Reach out at hello@human-driven.ai