A real ONNX model running in-process. Adjust the inputs, see the latency, then break the schema.
Live inference
Each submit hits /api/demo/predict, which calls session.predict()
synchronously in the Node.js route. The latency shown is actual wall-clock ONNX inference time.
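The latency measurement is easy to reproduce. A minimal sketch of the route's timing plumbing (an assumption about the demo, not PrisML source): wrap the predict call and time it with `performance.now()`.

```typescript
import { performance } from 'node:perf_hooks';

// Hypothetical helper, not part of PrisML: wraps any async call
// (e.g. session.predict) and reports wall-clock time in milliseconds.
async function timed<T>(fn: () => Promise<T>): Promise<{ value: T; ms: number }> {
  const t0 = performance.now();
  const value = await fn();
  return { value, ms: performance.now() - t0 };
}

// Usage inside the route (session, churnModel, user as elsewhere on this page):
//   const { value: result, ms } = await timed(() => session.predict(churnModel, user));
```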
Feature inputs
Prediction output
Latency comparison
Scale: 0 – 800 ms (Python subprocess = 100%). PrisML runs ONNX Runtime in-process — no serialisation, no network hop.
Schema drift guard
Every artifact carries a SHA-256 hash of the Prisma schema at compile time. If the schema changes
and you try to initialise predictions, PredictionSession throws a typed
SchemaDriftError before a single inference runs. No silent incorrectness.
$ ls -lh .prisml/
-rw-r--r--  52K  userChurn.onnx
-rw-r--r--   4K  userChurn.metadata.json
# both files are bound to schema hash a3f8c1b2
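The fingerprint itself is easy to picture. A sketch, assuming the 8-character hash such as `a3f8c1b2` is a truncated hex SHA-256 of the schema file contents (the function name and truncation length are illustrative guesses, not PrisML's API):

```typescript
import { createHash } from 'node:crypto';
import { readFileSync } from 'node:fs';

// Illustrative only: derive a short fingerprint like "a3f8c1b2"
// from the Prisma schema file contents.
function schemaHash(schemaPath: string): string {
  const contents = readFileSync(schemaPath, 'utf8');
  return createHash('sha256').update(contents).digest('hex').slice(0, 8);
}
```

Any byte-level change to the schema, even whitespace, would produce a different fingerprint under this scheme.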
Simulate schema change
This artifact was compiled against schema hash a3f8c1b2.
Simulate adding a field to the User model and watch the session refuse to proceed.
Session response
// Schema hash matches — inference proceeds normally.
const result = await session.predict(churnModel, user);
// result.prediction → "0"
// result.timestamp → "2026-03-08T12:00:00.000Z"
The session holds a3f8c1b2 — the hash of the schema at compile time.
When session.load() is called, PrisML re-hashes the current schema file and compares it against the stored hash.
A match means the feature resolvers are still valid against the live data shape.
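The comparison reduces to a single guard. A hedged sketch (this stand-in `SchemaDriftError` and `assertNoDrift` are illustrative; only the error message, modelled on the one this page documents, is taken from the source):

```typescript
// Sketch of the load-time guard. SchemaDriftError's real definition
// lives in PrisML; this stand-in only mirrors the documented message.
class SchemaDriftError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'SchemaDriftError';
  }
}

function assertNoDrift(modelName: string, trainedHash: string, currentHash: string): void {
  if (trainedHash === currentHash) return; // resolvers still valid
  throw new SchemaDriftError(
    `Model '${modelName}' was trained on schema hash ${trainedHash} ` +
    `but the current schema hash is ${currentHash}. ` +
    'Run `prisml train` to rebuild the artifact.'
  );
}
```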
What the error looks like
// After adding a new field to schema.prisma:
// model User {
// ...
// subscriptionTier String // ← new field
// }
// session.load() re-hashes the current schema file before loading.
await session.load(churnModel); // schema now hashes to 7d2e9f1a
// ✖ Throws:
// SchemaDriftError: Model 'userChurn' was trained on schema
// hash a3f8c1b2 but the current schema hash is 7d2e9f1a.
// Run `prisml train` to rebuild the artifact.
Integration
Define a model in TypeScript, compile once at build time, predict in-process at runtime. The same feature resolvers run during training and inference — the behaviour cannot drift.
import { defineModel } from '@vncsleal/prisml';
export const churnModel = defineModel<User>({
name: 'userChurn',
modelName: 'User',
output: {
field: 'willChurn',
taskType: 'binary_classification',
resolver: (u) => u.willChurn,
},
features: {
daysSinceActive: (u) =>
Math.floor((Date.now() - u.lastActiveAt.getTime()) / 86_400_000),
monthlySpend: (u) => u.monthlySpend,
supportTickets: (u) => u.supportTickets,
},
algorithm: { name: 'gbm' },
qualityGates: [
{ metric: 'f1', threshold: 0.80, comparison: 'gte' },
{ metric: 'accuracy', threshold: 0.85, comparison: 'gte' },
],
});

$ prisml train
◉ Loading Prisma schema...
✔ Schema loaded (hash: a3f8c1b2)
◉ Loading model definitions...
✔ Loaded 1 model definition(s)
◉ Validating models...
✔ Models validated
◉ Extracting training data via Prisma...
◉ Training userChurn (gbm)...
✔ Training complete
◉ Writing artifacts...
✔ Artifacts written to ./.prisml
[OK] Training complete
Artifacts:
userChurn.metadata.json
userChurn.onnx

import { PredictionSession } from '@vncsleal/prisml';
import { churnModel } from './models/churn';
const session = new PredictionSession();
await session.load(churnModel); // resolves .prisml/ and prisma/schema.prisma
const result = await session.predict(churnModel, user);
console.log(result.prediction); // "1" or "0"
console.log(result.timestamp); // "2026-03-08T12:00:00.000Z"
// Inference ran in ~0.4ms, in the same V8 process.

npm install @vncsleal/prisml
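The qualityGates array in the defineModel example above implies a simple evaluation rule: every gate must hold against the measured training metrics, or the artifact should be rejected. A sketch of that check (the types and `gatesPass` function are illustrative, not PrisML's API):

```typescript
// Illustrative types matching the qualityGates entries in defineModel above.
type Comparison = 'gte' | 'lte';
interface QualityGate { metric: string; threshold: number; comparison: Comparison; }

// Hypothetical check: true only if every gate passes against the
// measured training metrics.
function gatesPass(gates: QualityGate[], metrics: Record<string, number>): boolean {
  return gates.every(({ metric, threshold, comparison }) => {
    const value = metrics[metric];
    if (value === undefined) return false; // an unreported metric fails the gate
    return comparison === 'gte' ? value >= threshold : value <= threshold;
  });
}
```

Under this reading, a training run with f1 = 0.82 and accuracy = 0.90 would pass both gates from the example, while f1 = 0.79 would fail the first.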