Isolated Kubernetes GitOps with FluxCD and OCI Repositories
Introduction: The Challenge of Isolated Environments
Operating Kubernetes in isolated environments presents unique challenges for platform engineering teams. When clusters have no direct access to the public internet, traditional GitOps workflows break down. You cannot pull helm charts from public registries, cannot verify signatures against remote keys, and cannot rely on external sources for runtime images.
At cc cloud GmbH, we developed a robust architecture that brings the entire software supply chain into the isolated network boundary. This approach ensures that every artifact the cluster consumes—from helm charts to container images—is verified, signed, and available within the private network. The architecture is built on two fundamental repositories: Shared Artifacts and Software Releases. Together they create a complete GitOps pipeline that operates entirely within the isolation boundary.
Architecture Overview: The Two-Repository Pattern
The foundation of this architecture is a clear separation of concerns. We maintain two distinct source code repositories, each with a specific role in the software supply chain.
Shared Artifacts is the ingress point. It mirrors upstream helm charts from public registries, packages them as OCI artifacts, pushes them to the private OCI registry, tags them with version identifiers, and signs them with Cosign. This repository runs entirely in GitLab, driven by a GitLab CI pipeline with internet access. It ensures that all required software is available within the private network before any deployment attempt occurs.
Software Releases contains the configuration that defines which software runs in each cluster, in which version, and with which specific configuration. In helm chart terminology, this project contains the helm releases. Platform engineers write the configuration for the helm releases to be installed on the clusters here. The GitLab CI pipeline pushes this configuration across the isolation boundary, where FluxCD reads it and applies it to the clusters. Like Shared Artifacts, this repository lives in GitLab and is managed through GitLab CI.
This separation creates a clean handoff point. The GitLab CI pipeline drives both repositories from GitLab: Shared Artifacts brings signed artifacts into the private OCI registry, while Software Releases pushes deployment configuration across the boundary. FluxCD, operating entirely within the isolated zone, reads from both the private OCI registry and the pushed configuration to reconcile cluster state. The entire architecture is based on FluxCD with OCI repositories, using the Flux d2 reference architecture adapted for fully isolated operation.
Shared Artifacts: The Mirror
The Shared Artifacts repository solves the fundamental problem of isolated environments: how to get software inside the boundary. It operates as a continuous mirroring system that watches upstream helm registries and synchronizes them into the private OCI registry.
The mirroring process starts with a CSV configuration file that lists all public helm charts to be mirrored, including the source repository URL, chart name, and optional version constraints. This configuration defines the entire inventory of infrastructure software that will be available within the isolation boundary.
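As an illustration, such a CSV inventory might look like the following sketch. The column layout and entries are hypothetical; the actual file format depends on the pipeline scripts:

```csv
source_repo,chart_name,version_constraint
https://helm.cilium.io,cilium,>=1.16.0
https://charts.jetstack.io,cert-manager,>=1.15.0
https://kyverno.github.io/kyverno,kyverno,
```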
The mirroring pipeline processes this configuration through automated scripts. For each chart, the pipeline performs several operations. It pulls the chart from the upstream registry, extracts the version metadata, and determines if a new version needs to be mirrored. For raw OCI artifacts such as CRDs, it extracts the manifests and pushes them using the Flux CLI. For standard helm charts, it optionally injects test manifests, then pushes the chart to the OCI registry using Helm's OCI support.
Once pushed, each artifact is tagged with its semantic version and signed with Cosign. The signing uses a private key that is injected as a CI variable. This creates private tagged and signed copies of all upstream helm charts. The signatures provide cryptographic proof of origin. The public key is distributed to clusters as a Kubernetes Secret, enabling signature verification during deployment.
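The pull, push, and sign steps could be sketched as a GitLab CI job along these lines. The registry host, chart name, and variable names are illustrative, not taken from the actual pipeline:

```yaml
mirror-and-sign:
  stage: mirror
  script:
    # Pull the chart from the upstream registry at the resolved version
    - helm pull cilium --repo https://helm.cilium.io --version "${CHART_VERSION}"
    # Push it to the private OCI registry; Helm tags it with the chart version
    - helm push "cilium-${CHART_VERSION}.tgz" oci://registry.internal/charts
    # Sign the pushed artifact with the private key injected as a CI variable
    - cosign sign --yes --key env://COSIGN_PRIVATE_KEY "registry.internal/charts/cilium:${CHART_VERSION}"
```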
Software Releases: The Configurations
Software Releases lives in GitLab alongside Shared Artifacts. It contains the declarative configuration that FluxCD uses to manage cluster state: which software, in which version, and with which configuration runs in each cluster. In helm chart terminology, this project contains the helm releases. Platform engineers write and review configuration through GitLab merge requests, and the GitLab CI pipeline pushes the configuration across the isolation boundary into the isolated zone where FluxCD picks it up.
The deployment orchestration uses FluxCD ResourceSets. A ResourceSet is a FluxCD primitive that creates multiple related resources as a unit. Each infrastructure component has its own ResourceSet definition. The ResourceSet creates several resources: a Namespace for the component, a ServiceAccount and ClusterRoleBinding that grant FluxCD the permissions needed to manage resources, an OCIRepository that points to the private OCI registry, and Kustomizations that apply the configuration and controller manifests.
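A trimmed-down ResourceSet for a single component might look roughly like this. The `fluxcd.controlplane.io` API group is provided by the Flux Operator; names, registry URL, and versions are illustrative, and the ServiceAccount and ClusterRoleBinding are omitted for brevity:

```yaml
apiVersion: fluxcd.controlplane.io/v1
kind: ResourceSet
metadata:
  name: cilium
  namespace: flux-system
spec:
  resources:
    # Namespace for the component
    - apiVersion: v1
      kind: Namespace
      metadata:
        name: cilium-system
    # Source pointing at the private OCI registry
    - apiVersion: source.toolkit.fluxcd.io/v1beta2
      kind: OCIRepository
      metadata:
        name: cilium
        namespace: cilium-system
      spec:
        interval: 10m
        url: oci://registry.internal/charts/cilium
        ref:
          semver: 1.16.x
    # Kustomization applying the controller manifests
    - apiVersion: kustomize.toolkit.fluxcd.io/v1
      kind: Kustomization
      metadata:
        name: cilium
        namespace: cilium-system
      spec:
        interval: 10m
        sourceRef:
          kind: OCIRepository
          name: cilium
        path: ./
        prune: true
```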
The OCIRepository resource includes a verification section that ensures only signed artifacts are deployed. It references the private OCI registry where charts were pushed, tagged, and signed by the Shared Artifacts pipeline. The verification checks signatures using the Cosign public key. If the signature does not match, FluxCD marks the source as failed and refuses to deploy. This prevents tampering with charts in the registry. FluxCD also supports multi-tenant operation with sharding.
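In Flux terms, the verification section is part of the OCIRepository spec. A sketch, with an illustrative registry URL and secret name:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: cilium
  namespace: cilium-system
spec:
  interval: 10m
  url: oci://registry.internal/charts/cilium
  ref:
    semver: 1.16.x
  # Reject any artifact whose Cosign signature does not verify
  # against the distributed public key
  verify:
    provider: cosign
    secretRef:
      name: cosign-pub
```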
Deployment Order and Dependencies
Infrastructure components have dependencies. For example, the CNI must be ready before network policies can be applied, CRDs must be installed before the controllers that use them, and Cert-Manager must be available before any component that needs TLS certificates.
The deployment order is orchestrated through a dedicated ResourceSet that defines four deployment phases to ensure infrastructure comes up in the correct sequence:
The first phase deploys core infrastructure such as Cilium and Kyverno, establishing the network layer and admission control. The second phase handles security components such as cluster roles and RBAC configurations. The third phase covers DNS, ingress, and certificate management. The fourth phase deploys the observability stack.
Each phase declares dependency constraints that reference Kustomizations from previous phases. FluxCD respects these constraints and will not proceed to the next phase until the declared dependencies are ready. This ensures that infrastructure comes up in the correct order without manual intervention.
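A phase-two Kustomization might declare its dependency on phase one like this (phase and source names are illustrative):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: phase-2-security
  namespace: flux-system
spec:
  # Do not reconcile until the phase-1 Kustomization reports Ready
  dependsOn:
    - name: phase-1-core
  interval: 10m
  sourceRef:
    kind: OCIRepository
    name: software-releases
  path: ./security
  prune: true
  # Wait for the applied resources to become healthy before
  # dependents of this phase are allowed to proceed
  wait: true
```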
Environment Promotion: From Dev to Prod
Changes flow through three environments: dev → test → prod. The promotion pipeline is automated through Renovate and GitLab CI.
Renovate monitors the Software Releases repository for new chart versions. When an update is available, it creates a merge request with appropriate labels. Labels include the environment, update type, and promotion tracking status.
When an update is promoted to the next environment, the label changes to reflect the target environment. The promotion pipeline creates one-to-one merge requests that carry the exact version from one environment to the next. This ensures that what was tested in dev is exactly what deploys to test and eventually prod.
Auto-merge configurations enable non-disruptive updates for patch versions while requiring manual review for major changes. This balance allows the platform to stay current with security patches while maintaining stability for significant version changes.
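In Renovate, this balance can be expressed with package rules, for example as follows. This is a sketch, not the actual configuration:

```json
{
  "packageRules": [
    {
      "matchUpdateTypes": ["patch"],
      "automerge": true
    },
    {
      "matchUpdateTypes": ["major"],
      "automerge": false,
      "labels": ["needs-review"]
    }
  ]
}
```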
Security and Trust
Security in isolated environments requires defense in depth. Every layer of the architecture includes verification mechanisms.
Cosign Signing and Verification
All artifacts pushed to the Private OCI Registry are signed with Cosign. The private key is managed as a GitLab CI secret and is never exposed in the repository. The public key is distributed to clusters as a Kubernetes Secret in each component namespace.
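Distributing the public key can be as simple as a Secret like the following, which the OCIRepository verification then references. Namespace and names are illustrative; the key material is commonly stored under a `cosign.pub` entry:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cosign-pub
  namespace: cilium-system
stringData:
  cosign.pub: |
    -----BEGIN PUBLIC KEY-----
    ...
    -----END PUBLIC KEY-----
```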
FluxCD's OCIRepository resource checks the signature on every sync. If the signature is invalid or missing, FluxCD marks the source as failed and will not apply any changes from that artifact. This prevents tampering with charts in the registry.
The Pull-Through Cache: Closing the Final Gap
Container images referenced by helm charts must also be available within the isolation boundary. The architecture solves this with a pull-through cache mechanism.
Infrastructure as Code creates proxy rules that cover the upstream registries used by virtually all infrastructure helm charts: quay.io, registry.k8s.io, and Docker Hub. These rules allow the private registry to proxy and cache images from these upstream sources.
Kyverno is installed as an admission controller that overrides the image repository path. It mutates pod specifications to redirect image pulls from public registries to the private pull-through cache. This ensures that even if a helm chart references a public image URL, the cluster actually pulls from the private registry, where images are subject to security scanning.
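The mutation can be expressed as a Kyverno ClusterPolicy along the lines of Kyverno's documented "replace image registry" sample policy. The internal registry host is illustrative, and this sketch assumes fully qualified image references (an explicit registry host before the first slash):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: redirect-to-pull-through-cache
spec:
  rules:
    - name: rewrite-container-images
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        foreach:
          - list: request.object.spec.containers
            patchStrategicMerge:
              spec:
                containers:
                  - name: "{{ element.name }}"
                    # Replace the registry host (everything before the
                    # first slash) with the private pull-through cache
                    image: "{{ regex_replace_all_literal('^[^/]+', '{{element.image}}', 'registry.internal') }}"
```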
Workload Identity
FluxCD components authenticate to the private OCI registry using workload identity. This provides a clean separation between Kubernetes Service Accounts and the credentials required to access external services.
When FluxCD pulls charts or images, it uses the identity associated with its service account. This ensures that access to the private registry is tied to the workload itself rather than to static secrets that could be leaked or rotated manually.
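With Flux, this shows up as the provider field on the source, which switches authentication from static secrets to the cloud workload identity bound to the controller's service account. A sketch; the provider value (`aws`, `azure`, or `gcp`) depends on the platform, and the URL is illustrative:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: cilium
  namespace: cilium-system
spec:
  interval: 10m
  url: oci://registry.internal/charts/cilium
  # Authenticate via workload identity instead of a secretRef
  provider: azure
  ref:
    semver: 1.16.x
```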
Network Isolation with Cilium
Cilium provides the CNI and network policies that enforce zero-trust networking within the cluster. Each component namespace has specific CiliumNetworkPolicies that restrict egress to only the required destinations. The network policies ensure that even if a component is compromised, it cannot exfiltrate data to unauthorized endpoints.
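A per-namespace egress policy might look roughly like this. Namespace, labels, and the registry FQDN are illustrative; the sketch allows DNS lookups plus HTTPS to the private registry and denies all other egress:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: restrict-egress
  namespace: cert-manager
spec:
  # Applies to every pod in the namespace
  endpointSelector: {}
  egress:
    # Allow DNS to kube-dns (also enables FQDN-based rules below)
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"
    # Allow pulling from the private registry only
    - toFQDNs:
        - matchName: registry.internal
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```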
The deployment order ensures that Cilium and its network policies are installed before any workload components. This prevents a window where workloads run without network policy enforcement.
Conclusion
Isolated Kubernetes GitOps requires rethinking and splitting the software supply chain. The two-repository pattern separates artifact acquisition from deployment configuration. Both repositories live in GitLab and are driven by the GitLab CI pipeline. Shared Artifacts brings software into the isolation boundary, verifies it, and makes it available as signed OCI artifacts. Software Releases is pushed across the boundary by GitLab CI and orchestrates deployment using FluxCD ResourceSets, dependency ordering, and signature verification.
The architecture demonstrates that isolated environments can achieve the same automation and security standard as online environments. The key is moving the entire supply chain—including signing, verification, and promotion workflows—inside the network boundary. With proper artifact mirroring, dependency management, and security controls, isolated clusters can operate with confidence that every deployed component is exactly what was intended.
The next posts in this series will explore details of the implementation of the core concepts shown in this introduction.
Blog author
Sven Hertzberg
IT Consultant Cloud
Do you still have questions? Just send me a message.