This content originally appeared on DEV Community and was authored by Jesse Williams
Jozu recently introduced an On-Premise deployment option for its Orchestrator, giving organizations full control over their ML/AI supply chain. This post offers a closer look at how the architecture works, how it integrates with open standards like OCI and OIDC, and what it enables when deployed inside your own infrastructure.
What Is Jozu Orchestrator On-Premise?
Jozu Orchestrator—also known as Jozu Hub—is a private, self-managed solution that helps organizations securely manage their machine learning models, data artifacts, and application configurations. At its core, it allows teams to build and push ModelKits, which are OCI-compliant artifacts that bundle everything needed to train, deploy, or audit a machine learning system.
Each ModelKit is fully versioned, immutable, and contains models, code, datasets, parameters, and metadata. Once published to an internal OCI registry, these artifacts become trackable, reusable assets that can be queried, audited, and deployed across your ML lifecycle.
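To make this concrete, a ModelKit is defined by a Kitfile, a YAML manifest listing the assets to package. A minimal sketch is shown below; the package name, paths, and framework are illustrative, not taken from a real project:

```yaml
manifestVersion: "1.0"
package:
  name: churn-predictor            # hypothetical package name
  version: 1.2.0
  description: Customer churn model with its code and training data
model:
  path: ./models/churn.joblib      # serialized model file
  framework: scikit-learn
code:
  - path: ./src                    # training and inference code
datasets:
  - name: training-data
    path: ./data/train.csv
```

Everything referenced here is versioned together, so a single tag identifies the exact model, code, and data that shipped.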
This On-Premise setup mirrors the functionality of the hosted Jozu ML platform, but runs entirely within your own firewalls—giving you control over infrastructure, storage, and access policies.
What You’ll Need
To get started with Jozu Orchestrator On-Premise, you should already be working with Kubernetes, an OCI-compatible registry (such as Harbor or Docker Registry), and an OIDC-compliant identity provider like Okta, Azure AD, or Google Workspace. You should also be comfortable working with containerized ML assets—whether using ModelKits, MLflow, or similar tooling.
Architecture Overview
At a high level, the system has three major components: the OCI registry, the OIDC provider, and the Jozu Orchestrator itself. The registry handles all ModelKit image storage. The OIDC provider controls authentication. And the orchestrator ties it all together, handling push/pull events, indexing, scanning, and exposing a searchable interface for your team.
How ModelKits Flow Through the System
Let’s say one of your data scientists finishes training a model and wants to register it for deployment. Using the Jozu `kit` CLI, they initialize a Kitfile, pack the ModelKit, and push it (the repository name and tag below are illustrative):

```shell
kit init .
kit pack . -t <your-internal-registry>/my-team/my-model:v1
kit push <your-internal-registry>/my-team/my-model:v1
```

This packages the model, its dependencies, and metadata into a ModelKit and uploads it to your internal OCI registry. From there, the registry is configured to notify the Jozu Orchestrator of new pushes.
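How that notification is wired up depends on your registry. With the open-source Docker Registry (distribution), for example, push events can be forwarded via its built-in notifications block; the endpoint URL and token below are placeholders, and Harbor exposes the equivalent through its webhook settings:

```yaml
# Excerpt from a Docker Registry (distribution) config.yml.
# The endpoint URL and Authorization token are placeholders.
notifications:
  endpoints:
    - name: jozu-orchestrator
      url: https://jozu.internal.example.com/api/events   # hypothetical orchestrator endpoint
      headers:
        Authorization: [Bearer <token>]
      timeout: 5s
      threshold: 3     # consecutive failures before backing off
      backoff: 10s
```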
Once that notification is received, the orchestrator springs into action. It caches the new model’s metadata, kicks off background workers to run security scans, and generates signed attestations that are pushed back to the registry. These attestations provide cryptographic proof that the model was scanned and verified—so that downstream systems (or auditors) can trust its integrity.
The orchestrator UI also reflects the update, showing the new ModelKit along with relevant metadata, scan results, and revision history.
Exploring and Deploying ModelKits
Once your models are in the system, they’re easy to find and reuse. Developers and ML engineers can log in to the Jozu Orchestrator UI using their existing OIDC credentials. The system authenticates each user and filters visibility based on their permissions.
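The exact configuration surface depends on your deployment, but wiring the orchestrator to an OIDC provider generally comes down to supplying an issuer URL, a client ID, and a client secret. A hypothetical sketch follows; these keys are illustrative, not the actual Jozu configuration schema:

```yaml
# Illustrative OIDC settings -- not the real Jozu config schema.
auth:
  oidc:
    issuerUrl: https://login.example.com/realms/ml-platform  # your IdP's issuer
    clientId: jozu-orchestrator
    clientSecretRef:
      secretName: jozu-oidc        # Kubernetes Secret holding the client secret
      key: client-secret
    scopes: [openid, profile, email, groups]
```

Group claims from the IdP are what let the orchestrator filter ModelKit visibility per user, as described above.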
From there, users can:
- Search and browse published ModelKits
- View version history and audit trails
- See results of automated scans and attestation reports
- Copy deployment snippets for use in Kubernetes clusters
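A typical deployment snippet uses an init container to pull and unpack the ModelKit before the serving container starts. A sketch, assuming an illustrative `kit` CLI image and the placeholder ModelKit reference from earlier:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: churn-predictor
spec:
  replicas: 1
  selector:
    matchLabels: { app: churn-predictor }
  template:
    metadata:
      labels: { app: churn-predictor }
    spec:
      initContainers:
        - name: fetch-modelkit
          image: ghcr.io/kitops-ml/kitops:latest   # illustrative kit CLI image
          command: ["kit", "unpack", "<your-internal-registry>/my-team/my-model:v1", "-d", "/modelkit"]
          volumeMounts:
            - { name: modelkit, mountPath: /modelkit }
      containers:
        - name: server
          image: your-serving-image:latest         # placeholder serving image
          volumeMounts:
            - { name: modelkit, mountPath: /modelkit, readOnly: true }
      volumes:
        - name: modelkit
          emptyDir: {}
```

Because the ModelKit reference is an immutable tag in your registry, the pod always starts from the exact artifact that was scanned and attested.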
This creates a single source of truth for all ML/AI assets across your team, while maintaining tight access controls and a clear record of who pushed what, when.
Why It Matters
As machine learning models move from experimentation to production, managing them with the same rigor as traditional software is no longer optional. Jozu Orchestrator helps teams bridge that gap by providing a flexible platform for packaging, securing, and auditing ML assets—on your own infrastructure.
If you're ready to try Jozu Orchestrator On-Premise or want help evaluating how it could fit into your environment, reach out to our team for a guided walkthrough or deployment consultation.

Jesse Williams | Sciencx (2025-07-21T13:21:24+00:00) Deploying Jozu On-Premise: Architecture & Workflow Overview. Retrieved from https://www.scien.cx/2025/07/21/deploying-jozu-on-premise-architecture-workflow-overview/