@vncsleal / prisml

Machine learning
that compiles.

Define predictive models in TypeScript. Compile to immutable ONNX artifacts at build time. Run inference in-process — no Python sidecars and no HTTP overhead.

$ npm install @vncsleal/prisml
  • Define (TypeScript)
  • Compile (build time)
  • Validate (quality gates)
  • Predict (in-process)

Three files. Zero infrastructure.

Models are defined as code, compiled at build time, and executed in-process. Preflight validation catches config and environment issues early, before training crosses the Python boundary.

import { defineModel } from '@vncsleal/prisml';

export const churnModel = defineModel<User>({
  name: 'churnRisk',
  modelName: 'User',
  output: {
    field: 'willChurn',
    taskType: 'binary_classification',
    resolver: (user) => user.cancelledAt !== null,
  },
  features: {
    daysActive: (u) =>
      (Date.now() - u.createdAt.getTime()) / 86_400_000,
    loginFrequency: (u) => u.logins.length,
    planTier: (u) => u.subscription.tier,
    monthlySpend: (u) => u.totalSpend / u.monthsActive,
  },
  // algorithm is optional — omit it and FLAML AutoML
  // selects the best estimator for you automatically.
  qualityGates: [
    { metric: 'f1', threshold: 0.85, comparison: 'gte' },
  ],
});
$ prisml train

 Loading Prisma schema...
 Schema loaded
 Loading model definitions...
 Loaded 1 model definition(s)
 Running preflight validation...
 Preflight validation passed
 Checking Python environment...
 Python environment OK (flaml, sklearn, skl2onnx)
 Validating models...
 Models validated
 Extracting training data via Prisma...
 Training churnRisk (FLAML AutoML, 60s budget)...
 Best estimator: LGBMClassifier
 Writing artifacts...
 Artifacts written to ./.prisml

[OK] Training complete

Artifacts:
  churnRisk.metadata.json
  churnRisk.onnx
import { PredictionSession } from '@vncsleal/prisml';
import { churnModel } from './models/churn';

const session = new PredictionSession();

await session.load(churnModel);

const result = await session.predict(churnModel, user);

console.log(result.prediction);
// → "1" (will churn)
console.log(result.timestamp);
// → "2026-02-20T16:00:00.000Z"

Built for production correctness.

Compiler-first

Models are declared in TypeScript and compiled to ONNX at build time. Artifacts carry a Prisma schema hash so drift is caught before serving predictions.

In-process inference

Predictions run inside your V8 runtime via ONNX Runtime. No Python microservices, no HTTP latency, no serialization overhead.

Schema-bound

Every model artifact is bound to a Prisma schema via SHA-256 hash. If the schema drifts after compilation, inference is rejected at runtime.
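The hash-binding idea can be sketched in a few lines. This is illustrative only, not PrisML's internal implementation; the helper name `schemaHash` is hypothetical.

```typescript
import { createHash } from 'node:crypto';

// Hash a Prisma schema string the way an artifact might pin it.
// (Illustrative sketch — PrisML's actual hashing pipeline may differ.)
function schemaHash(schema: string): string {
  return createHash('sha256').update(schema).digest('hex');
}

const compiledAgainst = schemaHash('model User { id Int @id }');
const currentSchema = schemaHash('model User { id Int @id email String }');

// A drifted schema produces a different hash, so inference can refuse to run.
console.log(compiledAgainst === currentSchema);
// → false
```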

Deterministic encoding

Feature resolvers are pure functions. Training and inference use the same encoding rules from model metadata for consistent predictions.

Quality gates

Define metric thresholds (RMSE, F1, accuracy) in your model config. If training doesn't meet the gate, the training command fails — no silent regressions.
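Gate evaluation amounts to a threshold comparison per metric. The gate shape below mirrors the `defineModel()` config shown earlier; the evaluation function itself is a sketch, not PrisML's internal code.

```typescript
// Evaluate one quality gate against the metrics a training run produced.
type Gate = { metric: string; threshold: number; comparison: 'gte' | 'lte' };

function passesGate(metrics: Record<string, number>, gate: Gate): boolean {
  const value = metrics[gate.metric];
  if (value === undefined) return false; // unknown metric: fail closed
  return gate.comparison === 'gte'
    ? value >= gate.threshold
    : value <= gate.threshold;
}

const metrics = { f1: 0.91, rmse: 0.12 };

console.log(passesGate(metrics, { metric: 'f1', threshold: 0.85, comparison: 'gte' }));
// → true
console.log(passesGate(metrics, { metric: 'rmse', threshold: 0.1, comparison: 'lte' }));
// → false — training would fail rather than ship a regression
```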

Typed errors

Core failure modes expose named error classes such as SchemaDriftError, HydrationError, and ArtifactError.
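Named error classes make failure handling a plain `instanceof` dispatch. The classes below are minimal stand-ins to show the pattern; in a real project you would import the actual classes from `@vncsleal/prisml`, and the handler function is hypothetical.

```typescript
// Stand-in error classes, mirroring the names PrisML exposes.
class SchemaDriftError extends Error {}
class ArtifactError extends Error {}

// Route known failure modes to actionable messages; rethrow the rest.
function handlePredictionError(err: unknown): string {
  if (err instanceof SchemaDriftError) return 'retrain: schema changed since compilation';
  if (err instanceof ArtifactError) return 'rebuild: artifact missing or corrupt';
  throw err;
}

console.log(handlePredictionError(new SchemaDriftError('hash mismatch')));
// → "retrain: schema changed since compilation"
```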

Schema validation

Run prisml check to validate trained model contracts against your current Prisma schema and detect incompatibilities quickly.

Batch prediction

Process entire datasets in a single atomic call with session.predictBatch(). Either all entities succeed or the batch throws — no silent partial failures.
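The all-or-nothing semantics can be sketched with plain `Promise.all`: if any entity rejects, the whole batch rejects. This is only an illustration of the contract, with a stub predictor standing in for a real model; `session.predictBatch()` encapsulates the real behavior.

```typescript
// Atomic batch: every entity succeeds, or the whole call throws.
async function predictAll<T, R>(
  entities: T[],
  predictOne: (e: T) => Promise<R>,
): Promise<R[]> {
  // Promise.all rejects on the first failure, so callers
  // never observe a silent partial result.
  return Promise.all(entities.map(predictOne));
}

// Stub predictor for demonstration: fails on negative inputs.
const stub = async (n: number) => {
  if (n < 0) throw new Error('bad entity');
  return n * 2;
};

async function main() {
  console.log(await predictAll([1, 2, 3], stub));
  // → [2, 4, 6]
}
main();
```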

Models as build artifacts.

PrisML treats machine learning models the same way a compiler treats source code — as deterministic transformations from input to artifact.

01

Declaration

Use defineModel() to specify your model: target Prisma model, feature resolvers as pure functions, algorithm choice, and quality constraints. No data access — purely declarative.

02

Compilation

The CLI loads model definitions, extracts training data via Prisma, and materializes a training dataset. A Python backend trains the model and exports a versioned ONNX artifact + metadata pair.

03

Inference

At runtime, PredictionSession loads the ONNX model, validates the schema hash, and runs predictions synchronously in-process. The same feature resolvers ensure encoding parity.


Task types

  • Regression
  • Binary classification
  • Multiclass classification

Algorithms

  • AutoML (FLAML, default)
  • Linear / Logistic
  • Decision tree
  • Random forest
  • Gradient boosting

Feature types

  • Numeric (standard scaling)
  • Boolean (0/1)
  • String (one-hot encoding)
  • Date (timestamp)
  • Null (imputation)
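The feature-type rules above can be sketched as one encoding function. This is illustrative only: in PrisML the scaling statistics and one-hot vocabularies come from the compiled model metadata, not hard-coded options like these, and `encodeFeature` is a hypothetical name.

```typescript
// Illustrative encoding of the feature types listed above.
function encodeFeature(
  value: number | boolean | string | Date | null,
  opts: { mean?: number; std?: number; categories?: string[] } = {},
): number[] {
  if (value === null) return [opts.mean ?? 0];            // null → imputed
  if (typeof value === 'boolean') return [value ? 1 : 0]; // boolean → 0/1
  if (value instanceof Date) return [value.getTime()];    // date → timestamp
  if (typeof value === 'string') {
    // string → one-hot over a fixed category vocabulary
    return (opts.categories ?? []).map((c) => (c === value ? 1 : 0));
  }
  // numeric → standard scaling
  const { mean = 0, std = 1 } = opts;
  return [(value - mean) / std];
}

console.log(encodeFeature('pro', { categories: ['free', 'pro', 'team'] }));
// → [0, 1, 0]
console.log(encodeFeature(true));
// → [1]
```

Because the same rules are applied from metadata at both training and inference time, encodings stay in parity across the two phases.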

Start building.

Add PrisML to your TypeScript project and define your first model in minutes.