Define predictive models in TypeScript. Compile to immutable ONNX artifacts at build time. Run inference in-process — no Python sidecars, no HTTP overhead, no training-serving skew.
npm install @vncsleal/prisml

Workflow
Models are defined as code, compiled at build time, and executed in-process. The same feature resolvers used for training are reused at inference — eliminating skew by construction.
import { defineModel } from '@vncsleal/prisml';
import type { User } from '@prisma/client';

export const churnModel = defineModel<User>({
  name: 'churnRisk',
  modelName: 'User',
  output: {
    field: 'willChurn',
    taskType: 'binary_classification',
    // The label: a user who has cancelled is a positive example.
    resolver: (user) => user.cancelledAt !== null,
  },
  features: {
    // Pure functions over the Prisma record; reused verbatim at inference.
    daysActive: (u) =>
      (Date.now() - u.createdAt.getTime()) / 86_400_000, // ms per day
    loginFrequency: (u) => u.logins.length,
    planTier: (u) => u.subscription.tier,
    monthlySpend: (u) => u.totalSpend / u.monthsActive,
  },
  algorithm: { name: 'forest', version: '1.0.0' },
  qualityGates: [
    { metric: 'f1', threshold: 0.85, comparison: 'gte' },
  ],
});

$ prisml train \
  --config ./prisml.config.ts \
  --schema ./prisma/schema.prisma
▸ Analyzing feature resolvers...
▸ Extracting: daysActive, loginFrequency,
planTier, monthlySpend
▸ Schema hash: a3f8c1... bound
▸ Training churnRisk (forest v1.0.0)
▸ Quality gate: f1 ≥ 0.85 → 0.91 ✓
▸ Artifact: ./artifacts/churnRisk.onnx (48KB)
▸ Metadata: ./artifacts/churnRisk.meta.json

import { PredictionSession } from '@vncsleal/prisml';
import { churnModel } from './models/churn';
const session = new PredictionSession();

// Loads the artifact pair; the bound schema hash is verified
// against the current Prisma schema before predictions run.
await session.initializeModel(
  'churnRisk',
  './artifacts/churnRisk.meta.json',
  './artifacts/churnRisk.onnx',
);

const result = await session.predict(
  'churnRisk',
  user,
  churnModel.features, // the same resolvers used for training
);
console.log(result.prediction);
// → "1" (will churn)
console.log(result.confidence);
// → 0.87

Capabilities
Models are declared in TypeScript and compiled to ONNX at build time. If your Prisma schema changes, your build fails — catching drift before deployment.
Predictions run inside your V8 runtime via ONNX Runtime. No Python microservices, no HTTP latency, no serialization overhead.
Every model artifact is bound to a Prisma schema via SHA-256 hash. If the schema drifts after compilation, inference is rejected at runtime.
Feature resolvers are pure functions. The same encoding logic runs during training and inference, eliminating training-serving skew by construction.
Define metric thresholds (RMSE, F1, accuracy) in your model config. If training doesn't meet the gate, the build fails — no silent regressions.
Every failure mode has a named error class — SchemaDriftError, HydrationError, ArtifactError. No opaque runtime exceptions.
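These guarantees are easiest to see in error-handling code. A minimal sketch, assuming the error classes are exported from the package root (that export path is an assumption, not confirmed above):

import {
  PredictionSession,
  SchemaDriftError,
  HydrationError,
  ArtifactError,
} from '@vncsleal/prisml';
import type { User } from '@prisma/client';
import { churnModel } from './models/churn';

// Hypothetical wrapper: route each named failure mode to a distinct
// recovery path instead of one opaque catch-all.
async function safePredict(session: PredictionSession, user: User) {
  try {
    return await session.predict('churnRisk', user, churnModel.features);
  } catch (err) {
    if (err instanceof SchemaDriftError) {
      // The artifact's schema hash no longer matches prisma/schema.prisma.
      throw new Error('Stale artifact: re-run `prisml train` against the current schema.');
    }
    if (err instanceof HydrationError) {
      // A feature resolver touched a field or relation that was not loaded.
      throw new Error('User record is missing data the feature resolvers need.');
    }
    if (err instanceof ArtifactError) {
      // The .onnx or metadata file is missing, corrupt, or mismatched.
      throw new Error('Could not load the churnRisk artifact pair.');
    }
    throw err;
  }
}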
Architecture
PrisML treats machine learning models the same way a compiler treats source code — as deterministic transformations from input to artifact.
Use defineModel() to specify your model: target Prisma model, feature resolvers as pure functions, algorithm choice, and quality constraints. No data access — purely declarative.
The CLI analyzes feature resolvers via AST, extracts access patterns, and materializes a training dataset. A Python backend trains the model and exports a versioned ONNX artifact + metadata pair.
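The metadata half of that pair is what makes the artifact self-describing. Its exact format isn't documented here; purely as an illustration, the fields it would plausibly carry look like this (every field name below is an assumption, not the real format):

// Illustrative shape of churnRisk.meta.json; field names are assumptions.
const exampleMetadata = {
  name: 'churnRisk',
  algorithm: { name: 'forest', version: '1.0.0' },
  schemaHash: 'a3f8c1...', // SHA-256 of prisma/schema.prisma at train time
  featureOrder: ['daysActive', 'loginFrequency', 'planTier', 'monthlySpend'],
  metrics: { f1: 0.91 },   // the value checked against the quality gate
};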
At runtime, PredictionSession loads the ONNX model, validates the schema hash, and runs predictions synchronously in-process. The same feature resolvers ensure encoding parity.
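Encoding parity falls out of this design: a feature vector is just the resolver map applied to a record, so training and serving share one code path. A minimal sketch of the idea (not PrisML internals; categorical encoding such as planTier is glossed over):

// One resolver map, one code path for both phases.
function resolveFeatures<T>(
  record: T,
  features: Record<string, (r: T) => unknown>,
): Record<string, unknown> {
  const row: Record<string, unknown> = {};
  for (const [name, resolve] of Object.entries(features)) {
    row[name] = resolve(record); // identical call at train and serve time
  }
  return row;
}

// e.g. resolveFeatures(user, churnModel.features)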
Add PrisML to your TypeScript project and define your first model in minutes.