Define predictive models in TypeScript. Compile to immutable ONNX artifacts at build time. Run inference in-process — no Python sidecars and no HTTP overhead.
Workflow
Models are defined as code, compiled at build time, and executed in-process. Preflight validation catches config and environment issues early, before training crosses the Python boundary.
import { defineModel } from '@vncsleal/prisml';

export const churnModel = defineModel<User>({
  name: 'churnRisk',
  modelName: 'User',
  output: {
    field: 'willChurn',
    taskType: 'binary_classification',
    resolver: (user) => user.cancelledAt !== null,
  },
  features: {
    daysActive: (u) =>
      (Date.now() - u.createdAt.getTime()) / 86_400_000,
    loginFrequency: (u) => u.logins.length,
    planTier: (u) => u.subscription.tier,
    monthlySpend: (u) => u.totalSpend / u.monthsActive,
  },
  // algorithm is optional — omit it and FLAML AutoML
  // selects the best estimator for you automatically.
  qualityGates: [
    { metric: 'f1', threshold: 0.85, comparison: 'gte' },
  ],
});

$ prisml train
◉ Loading Prisma schema...
✔ Schema loaded
◉ Loading model definitions...
✔ Loaded 1 model definition(s)
◉ Running preflight validation...
✔ Preflight validation passed
◉ Checking Python environment...
✔ Python environment OK (flaml, sklearn, skl2onnx)
◉ Validating models...
✔ Models validated
◉ Extracting training data via Prisma...
◉ Training churnRisk (FLAML AutoML, 60s budget)...
✔ Best estimator: LGBMClassifier
◉ Writing artifacts...
✔ Artifacts written to ./.prisml
[OK] Training complete
Artifacts:
  churnRisk.metadata.json
  churnRisk.onnx

import { PredictionSession } from '@vncsleal/prisml';
import { churnModel } from './models/churn';

const session = new PredictionSession();
await session.load(churnModel);

const result = await session.predict(churnModel, user);

console.log(result.prediction);
// → "1" (will churn)
console.log(result.timestamp);
// → "2026-02-20T16:00:00.000Z"

Capabilities
Models are declared in TypeScript and compiled to ONNX at build time. Artifacts carry a Prisma schema hash so drift is caught before serving predictions.
Predictions run inside your V8 runtime via ONNX Runtime. No Python microservices, no HTTP latency, no serialization overhead.
Every model artifact is bound to a Prisma schema via SHA-256 hash. If the schema drifts after compilation, inference is rejected at runtime.
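The drift check can be sketched generically: hash the schema text the same way at compile time and at load time, and refuse to serve if the two fingerprints differ. The assertSchemaMatch helper below is illustrative, not the library's internal API:

```typescript
import { createHash } from 'node:crypto';

// Compute the schema fingerprint identically at compile time and at load time.
function schemaHash(schemaText: string): string {
  return createHash('sha256').update(schemaText).digest('hex');
}

// Hypothetical guard: refuse inference when the current schema no longer
// matches the hash the artifact was compiled against.
function assertSchemaMatch(artifactHash: string, currentSchemaText: string): void {
  const current = schemaHash(currentSchemaText);
  if (current !== artifactHash) {
    throw new Error(
      `SchemaDriftError: artifact built for ${artifactHash.slice(0, 12)}, ` +
        `current schema hashes to ${current.slice(0, 12)}`,
    );
  }
}
```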
Feature resolvers are pure functions. Training and inference use the same encoding rules from model metadata for consistent predictions.
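Encoding parity can be illustrated with a small sketch: the same pure resolver functions produce the feature vector in both phases, with column order fixed by metadata. The encode helper and the User shape here are illustrative, not the library's internals:

```typescript
type User = { createdAt: Date; logins: string[] };

// Pure resolvers: no I/O and no hidden state, so training and inference
// derive features from an entity in exactly the same way.
const resolvers: Record<string, (u: User) => number> = {
  daysActive: (u) => (Date.now() - u.createdAt.getTime()) / 86_400_000,
  loginFrequency: (u) => u.logins.length,
};

// Encode an entity using a fixed column order taken from model metadata,
// so train-time and serve-time feature columns always line up.
function encode(u: User, columnOrder: string[]): number[] {
  return columnOrder.map((name) => resolvers[name](u));
}
```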
Define metric thresholds (RMSE, F1, accuracy) in your model config. If training doesn't meet the gate, the training command fails — no silent regressions.
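The gate check itself is plain threshold logic. A sketch matching the gate shape from the config above (the failedGates helper is illustrative):

```typescript
type QualityGate = { metric: string; threshold: number; comparison: 'gte' | 'lte' };

// Return every gate the training metrics fail to satisfy; a non-empty
// result means the train command should exit with an error.
function failedGates(
  gates: QualityGate[],
  metrics: Record<string, number>,
): QualityGate[] {
  return gates.filter((gate) => {
    const value = metrics[gate.metric];
    if (value === undefined) return true; // a missing metric counts as a failure
    return gate.comparison === 'gte' ? value < gate.threshold : value > gate.threshold;
  });
}
```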
Core failure modes surface as named error classes such as SchemaDriftError, HydrationError, and ArtifactError.
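The class names above come from the docs; how you branch on them is up to you. One plausible pattern, with illustrative class definitions standing in for the real exports:

```typescript
// Illustrative definitions; the real classes ship with the library.
class SchemaDriftError extends Error {}
class HydrationError extends Error {}
class ArtifactError extends Error {}

// instanceof checks let callers react to each failure mode distinctly
// instead of string-matching on error messages.
function describeFailure(e: unknown): string {
  if (e instanceof SchemaDriftError) return 'schema drifted: retrain the model';
  if (e instanceof HydrationError) return 'entity shape did not match the features';
  if (e instanceof ArtifactError) return 'artifact missing or corrupt: re-run prisml train';
  throw e; // not a known PrisML failure mode
}
```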
Run prisml check to validate trained model contracts against your current Prisma schema and detect incompatibilities quickly.
Process entire datasets in a single atomic call with session.predictBatch(). Either all entities succeed or the batch throws — no silent partial failures.
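The all-or-nothing behavior can be sketched as: collect every prediction before exposing any, so one bad entity throws and the caller never sees partial results. Here predictBatchAtomic is an illustrative stand-in for session.predictBatch():

```typescript
// All-or-nothing batch: any throw aborts the whole call, so callers
// never observe a partially populated result set.
function predictBatchAtomic<T, R>(entities: T[], predictOne: (e: T) => R): R[] {
  const results: R[] = [];
  for (const entity of entities) {
    results.push(predictOne(entity)); // a throw here discards `results`
  }
  return results;
}
```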
Architecture
PrisML treats machine learning models the same way a compiler treats source code — as deterministic transformations from input to artifact.
Use defineModel() to specify your model: the target Prisma model, feature resolvers as pure functions, algorithm choice, and quality constraints. The definition performs no data access — it is purely declarative.
The CLI loads model definitions, extracts training data via Prisma, and materializes a training dataset. A Python backend trains the model and exports a versioned ONNX artifact + metadata pair.
At runtime, PredictionSession loads the ONNX model, validates the schema hash,
and runs predictions synchronously in-process. The same feature resolvers ensure encoding parity.
Add PrisML to your TypeScript project and define your first model in minutes.