From f7c965adede48a299de476f1b284b22580c64d78 Mon Sep 17 00:00:00 2001
From: salonichf5 <146118978+salonichf5@users.noreply.github.com>
Date: Wed, 13 Aug 2025 10:52:18 -0600
Subject: [PATCH 1/3] update user architecture guide
---
content/ngf/overview/gateway-architecture.md | 429 +++++++------------
1 file changed, 160 insertions(+), 269 deletions(-)
diff --git a/content/ngf/overview/gateway-architecture.md b/content/ngf/overview/gateway-architecture.md
index d51dd6d74..0ecc42782 100644
--- a/content/ngf/overview/gateway-architecture.md
+++ b/content/ngf/overview/gateway-architecture.md
@@ -7,38 +7,23 @@ nd-product: NGF
nd-docs: DOCS-1413
---
-Learn about the architecture and design principles of NGINX Gateway Fabric.
+Learn about the architecture and design principles of NGINX Gateway Fabric -- a Kubernetes Gateway API implementation using NGINX as the data plane.
-The intended audience for this information is primarily the two following groups:
+This document is intended for:
-- _Cluster Operators_ who would like to know how the software works and understand how it can fail.
-- _Developers_ who would like to [contribute](https://github.com/nginx/nginx-gateway-fabric/blob/main/CONTRIBUTING.md) to the project.
+- _Cluster Operators_ who want to understand how NGINX Gateway Fabric works in production, how it manages traffic, and how to troubleshoot failures.
-The reader needs to be familiar with core Kubernetes concepts, such as pods, deployments, services, and endpoints. For an understanding of how NGINX itself works, you can read the ["Inside NGINX: How We Designed for Performance & Scale"](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/) blog post.
-
----
-
-## Overview
-
-NGINX Gateway Fabric is an open source project that provides an implementation of the [Gateway API](https://gateway-api.sigs.k8s.io/) using [NGINX](https://nginx.org/) as the data plane. The goal of this project is to implement the core Gateway APIs -- _Gateway_, _GatewayClass_, _HTTPRoute_, _GRPCRoute_, _TCPRoute_, _TLSRoute_, and _UDPRoute_ -- to configure an HTTP or TCP/UDP load balancer, reverse proxy, or API gateway for applications running on Kubernetes. NGINX Gateway Fabric supports a subset of the Gateway API.
-
-For a list of supported Gateway API resources and features, see the [Gateway API Compatibility]({{< ref "/ngf/overview/gateway-api-compatibility.md" >}}) documentation.
-
-NGINX Gateway Fabric separates the control plane and data plane into distinct deployments. This architectural separation enhances scalability, security, and operational isolation between the two components.
+- _Application Developers_ who would like to use NGINX Gateway Fabric to expose and route traffic to their applications within Kubernetes.
-The control plane interacts with the Kubernetes API, watching for Gateway API resources. When a new Gateway resource is provisioned, it dynamically creates and manages a corresponding NGINX data plane Deployment and Service. This ensures that the system can adapt to changes in the cluster state seamlessly.
-
-Each NGINX data plane pod consists of an NGINX container integrated with the [NGINX agent](https://github.com/nginx/agent). The agent securely communicates with the control plane using gRPC. The control plane translates Gateway API resources into NGINX configurations and sends these configurations to the agent to ensure consistent traffic management.
-
-This design enables centralized management of multiple Gateways while ensuring that each NGINX instance stays aligned with the cluster's current configuration. Labels, annotations, and infrastructure settings such as service type or replica count can be specified globally via the Helm chart or customized per Gateway using the enhanced NginxProxy CRD and the Gateway's `infrastructure` section.
+The reader needs to be familiar with core Kubernetes concepts, such as pods, deployments, services, and endpoints. For an understanding of how NGINX itself works, you can read the ["Inside NGINX: How We Designed for Performance & Scale"](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/) blog post.
-We have more information regarding our [design principles](https://github.com/nginx/nginx-gateway-fabric/blob/v1.6.1/docs/developer/design-principles.md) in the project's GitHub repository.
+If you are interested in contributing to the project or learning about its internal implementation details, please see the [Developer Architecture Guide](https://github.com/nginx/nginx-gateway-fabric/tree/main/docs/architecture).
---
## NGINX Gateway Fabric Deployment Model and Architectural Overview
-The NGINX Gateway Fabric architecture separates the control plane and data plane into distinct and independent Deployments, ensuring enhanced security, flexibility, and resilience.
+NGINX Gateway Fabric splits its architecture into two main parts -- the control plane and the data plane -- to provide better security, flexibility, and reliability:
### Control Plane: Centralized Management
@@ -52,43 +37,21 @@ Key functionalities include:
### Data Plane: Autonomous Traffic Management
-Each NGINX data plane pod is provisioned as an independent Deployment containing an `nginx` container. This container runs both the `nginx` process and the [NGINX agent](https://github.com/nginx/agent), which is responsible for:
+The NGINX data plane is provisioned as an independent Deployment or DaemonSet, whose pods each contain an `nginx` container. This container runs both the `nginx` process and the [NGINX agent](https://github.com/nginx/agent), which is responsible for:
- Applying configurations: The agent receives updates from the control plane and applies them to the NGINX instance.
- Handling reloads: NGINX Agent handles configuration reconciliation and reloading NGINX, eliminating the need for shared volumes or Unix signals between the control plane and data plane pods.
-With this design, multiple NGINX data planes can be managed by a single control plane, enabling fine-grained, Gateway-specific control and isolation.
-
### Gateway Resource Management
-The architecture supports flexible operation and isolation across multiple Gateways:
+You can run multiple Gateways side by side in the same cluster. This supports flexible operation and isolation across Gateways:
- Concurrent Gateways: Multiple Gateway objects can run simultaneously within a single installation.
- 1:1 resource mapping: Each Gateway resource corresponds uniquely to a dedicated data plane deployment, ensuring clear delineation of ownership and operational segregation.
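+To make the 1:1 mapping concrete, the following is a minimal sketch of a Gateway resource; when it is created, the control plane provisions a dedicated NGINX data plane Deployment and Service for it. The resource name and namespace are illustrative, and the `nginx` GatewayClass name assumes a default installation:
+
+```yaml
+apiVersion: gateway.networking.k8s.io/v1
+kind: Gateway
+metadata:
+  name: gateway-a
+  namespace: applications
+spec:
+  gatewayClassName: nginx   # GatewayClass installed with NGINX Gateway Fabric
+  listeners:
+  - name: http
+    port: 80
+    protocol: HTTP
+    hostname: "*.example.com"   # routes can attach only if their hostname matches
+```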
-### Resilience and Fault Isolation
-
-One of the primary advantages of this architecture is enhanced operational resilience and fault isolation:
-
-#### Control Plane Resilience
-
-In the event of a control plane failure or downtime:
-- Existing data plane pods continue serving traffic using their last-valid cached configurations.
-- Updates to routes or Gateways are temporarily paused, but stable traffic delivery continues without degradation.
-- Recovery restores functionality, resynchronizing configuration updates seamlessly.
-
-#### Data Plane Resilience
-
-If a data plane pod encounters an outage or restarts:
-- Only routes tied to the specific linked Gateway object experience brief disruptions.
-- Configurations automatically resynchronize with the data plane upon pod restart, minimizing the scope of impact.
-- Other data plane pods remain unaffected and continue serving traffic normally.
-
-This split architecture ensures operational boundaries between the control plane and data plane, delivering improved scalability, security, and robustness while minimizing risks associated with failures in either component.
-
---
-## High-Level Example of NGINX Gateway Fabric in Action
+## High-Level Overview of NGINX Gateway Fabric in Action
This figure depicts an example of NGINX Gateway Fabric exposing three web applications within a Kubernetes cluster to clients on the internet:
@@ -96,161 +59,119 @@ This figure depicts an example of NGINX Gateway Fabric exposing three web applic
graph LR
%% Nodes and Relationships
subgraph KubernetesCluster[Kubernetes Cluster]
-
- subgraph applications2[Namespace: applications2]
-
- subgraph DataplaneComponentsC[Dataplane Components]
- GatewayC[Gateway C
Listener: *.other-example.com]
-
- subgraph NGINXPodC[NGINX Pod]
- subgraph NGINXContainerC[NGINX Container]
- NGINXProcessC(NGINX)
- NGINXAgentC(NGINX Agent)
- end
+ subgraph ApplicationsNamespaceA[Namespace: applications]
+ subgraph DataplaneComponentsA[Dataplane Components]
+ GatewayA[Gateway A\nListener: *.example.com]
+ subgraph NGINXPodA[NGINX Pod]
+ subgraph NGINXContainerA[NGINX Container]
+ NGINXProcessA(NGINX)
+ NGINXAgentA(NGINX Agent)
end
end
-
- subgraph HTTPRouteCAndApplicationC[HTTPRoute C and Application C]
- HTTPRouteC[HTTPRoute C
Host: c.other-example.com]
- ApplicationC[Application C
Pods: 1]
- end
-
- end
-
- subgraph nginx-gateway[Namespace: nginx-gateway]
- NGFPod[NGF Pod]
+ end
+ subgraph HTTPRouteAAndApplications[HTTPRoutes and Applications]
+ HTTPRouteA[HTTPRoute A\nHost: a.example.com]
+ HTTPRouteB[HTTPRoute B\nHost: b.example.com]
+ ApplicationA[Application A\nPods: 2]
+ ApplicationB[Application B\nPods: 1]
+ end
end
-
- subgraph applications1[Namespace: applications]
-
- subgraph DataplaneComponentsAB[Dataplane Components]
- GatewayAB[Gateway AB
Listener: *.example.com]
-
- subgraph NGINXPodAB[NGINX Pod]
- subgraph NGINXContainerAB[NGINX Container]
- NGINXProcessAB(NGINX)
- NGINXAgentAB(NGINX Agent)
+ subgraph ApplicationsNamespaceB[Namespace: applications-2]
+ subgraph DataplaneComponentsB[Dataplane Components]
+ GatewayB[Gateway B\nListener: *.other-example.com]
+ subgraph NGINXPodB[NGINX Pod]
+ subgraph NGINXContainerB[NGINX Container]
+ NGINXProcessB(NGINX)
+ NGINXAgentB(NGINX Agent)
end
end
end
-
- subgraph HTTPRouteBAndApplicationB[HTTPRoute B and Application B]
- HTTPRouteB[HTTPRoute B
Host: b.example.com]
- ApplicationB[Application B
Pods: 1]
- end
-
- subgraph HTTPRouteAAndApplicationA[HTTPRoute A and Application A]
- HTTPRouteA[HTTPRoute A
Host: a.example.com]
- ApplicationA[Application AB
Pods: 2]
+ subgraph HTTPRouteBandApplications[HTTPRoutes and Applications]
+ HTTPRouteC[HTTPRoute C\nHost: c.other-example.com]
+ ApplicationC[Application C\nPods: 1]
end
end
-
KubernetesAPI[Kubernetes API]
end
-
- subgraph Users[Users]
- ClusterOperator[Cluster Operator]
- AppDevA[Application Developer A]
- AppDevB[Application Developer B]
- AppDevC[Application Developer C]
+ subgraph UsersAndClients[Users and Clients]
+ UserOperator[Cluster Operator]
+ UserDevA[Application Developer A]
+ UserDevB[Application Developer B]
+ ClientA[Client A]
+ ClientB[Client B]
end
-
- subgraph Clients[Clients]
- ClientsA[Clients A]
- ClientsB[Clients B]
- ClientsC[Clients C]
+ subgraph SharedInfrastructure[Public Endpoint]
+ PublicEndpoint[TCP Load Balancer / NodePort]
end
-
- subgraph "Public Endpoints"
- PublicEndpointAB[Public Endpoint AB
TCP Load Balancer/NodePort]
- PublicEndpointC[Public Endpoint C
TCP Load Balancer/NodePort]
- end
-
%% Updated Traffic Flow
- ClientsA == a.example.com ==> PublicEndpointAB
- ClientsB == b.example.com ==> PublicEndpointAB
- ClientsC == c.other-example.com ==> PublicEndpointC
-
- PublicEndpointAB ==> NGINXProcessAB
- PublicEndpointC ==> NGINXProcessC
- NGINXProcessAB ==> ApplicationA
- NGINXProcessAB ==> ApplicationB
- NGINXProcessC ==> ApplicationC
-
+ ClientA == a.example.com ==> PublicEndpoint
+ ClientB == c.other-example.com ==> PublicEndpoint
+ PublicEndpoint ==> NGINXProcessA
+ PublicEndpoint ==> NGINXProcessB
+ NGINXProcessA ==> ApplicationA
+ NGINXProcessA ==> ApplicationB
+ NGINXProcessB ==> ApplicationC
%% Kubernetes Configuration Flow
- HTTPRouteA --> GatewayAB
- HTTPRouteB --> GatewayAB
- HTTPRouteC --> GatewayC
-
- NGFPod --> KubernetesAPI
- NGFPod --gRPC--> NGINXAgentAB
- NGINXAgentAB --> NGINXProcessAB
- NGFPod --gRPC--> NGINXAgentC
- NGINXAgentC --> NGINXProcessC
-
- ClusterOperator --> KubernetesAPI
- AppDevA --> KubernetesAPI
- AppDevB --> KubernetesAPI
- AppDevC --> KubernetesAPI
-
+ HTTPRouteA --> GatewayA
+ HTTPRouteB --> GatewayA
+ HTTPRouteC --> GatewayB
+ UserOperator --> KubernetesAPI
+ NGFPod[NGF Pod] --> KubernetesAPI
+ NGFPod --gRPC--> NGINXAgentA
+ NGFPod --gRPC--> NGINXAgentB
+ NGINXAgentA --> NGINXProcessA
+ NGINXAgentB --> NGINXProcessB
+ UserDevA --> KubernetesAPI
+ UserDevB --> KubernetesAPI
%% Styling
- style ClusterOperator fill:#66CDAA,stroke:#333,stroke-width:2px
- style GatewayAB fill:#66CDAA,stroke:#333,stroke-width:2px
- style GatewayC fill:#66CDAA,stroke:#333,stroke-width:2px
+ style UserOperator fill:#66CDAA,stroke:#333,stroke-width:2px
+ style GatewayA fill:#66CDAA,stroke:#333,stroke-width:2px
+ style GatewayB fill:#66CDAA,stroke:#333,stroke-width:2px
style NGFPod fill:#66CDAA,stroke:#333,stroke-width:2px
-
- style NGINXProcessAB fill:#66CDAA,stroke:#333,stroke-width:2px
- style NGINXProcessC fill:#66CDAA,stroke:#333,stroke-width:2px
-
+ style NGINXProcessA fill:#66CDAA,stroke:#333,stroke-width:2px
+ style NGINXProcessB fill:#66CDAA,stroke:#333,stroke-width:2px
style KubernetesAPI fill:#9370DB,stroke:#333,stroke-width:2px
-
- style HTTPRouteAAndApplicationA fill:#E0FFFF,stroke:#333,stroke-width:2px
- style HTTPRouteBAndApplicationB fill:#E0FFFF,stroke:#333,stroke-width:2px
-
- style AppDevA fill:#FFA07A,stroke:#333,stroke-width:2px
+ style HTTPRouteAAndApplications fill:#E0FFFF,stroke:#333,stroke-width:2px
+ style HTTPRouteBandApplications fill:#E0FFFF,stroke:#333,stroke-width:2px
+ style UserDevA fill:#FFA07A,stroke:#333,stroke-width:2px
style HTTPRouteA fill:#FFA07A,stroke:#333,stroke-width:2px
+ style HTTPRouteB fill:#FFA07A,stroke:#333,stroke-width:2px
style ApplicationA fill:#FFA07A,stroke:#333,stroke-width:2px
- style ClientsA fill:#FFA07A,stroke:#333,stroke-width:2px
-
- style AppDevB fill:#87CEEB,stroke:#333,stroke-width:2px
- style HTTPRouteB fill:#87CEEB,stroke:#333,stroke-width:2px
- style ApplicationB fill:#87CEEB,stroke:#333,stroke-width:2px
- style ClientsB fill:#87CEEB,stroke:#333,stroke-width:2px
+ style ApplicationB fill:#FFA07A,stroke:#333,stroke-width:2px
+ style ClientA fill:#FFA07A,stroke:#333,stroke-width:2px
+ style UserDevB fill:#87CEEB,stroke:#333,stroke-width:2px
+ style HTTPRouteC fill:#87CEEB,stroke:#333,stroke-width:2px
+ style ApplicationC fill:#87CEEB,stroke:#333,stroke-width:2px
+ style ClientB fill:#87CEEB,stroke:#333,stroke-width:2px
+ style PublicEndpoint fill:#FFD700,stroke:#333,stroke-width:2px
+```
- style AppDevC fill:#FFC0CB,stroke:#333,stroke-width:2px
- style HTTPRouteC fill:#FFC0CB,stroke:#333,stroke-width:2px
- style ApplicationC fill:#FFC0CB,stroke:#333,stroke-width:2px
- style ClientsC fill:#FFC0CB,stroke:#333,stroke-width:2px
+{{< call-out "note" >}} The figure does not show many of the Kubernetes resources that Cluster Operators and Application Developers need to create, such as Deployments and Services. {{< /call-out >}}
- style PublicEndpointAB fill:#FFD700,stroke:#333,stroke-width:2px
- style PublicEndpointC fill:#FFD700,stroke:#333,stroke-width:2px
+The figure shows:
- %% Styling
- classDef dashedSubgraph stroke-dasharray: 5, 5;
+{{< bootstrap-table "table table-bordered table-striped table-responsive" >}}
- %% Assign Custom Style Classes
- class DataplaneComponentsAB dashedSubgraph;
- class DataplaneComponentsC dashedSubgraph;
-```
+| **Category** | **Description** |
+|-------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **Namespaces** | - _Namespace: applications_: Contains Gateway A for `*.example.com`, handling Application A and Application B. <br> - _Namespace: applications-2_: Contains Gateway B for `*.other-example.com`, handling Application C. |
+| **Users** | - _Cluster Operator_: Sets up NGINX Gateway Fabric and manages Gateway API resources, provisioning the NGF Pod and Gateways (A and B). <br> - _Developers A & B_: Deploy their applications and create HTTPRoutes. |
+| **Clients** | - _Client A_: Interacts with Application A through `a.example.com`. <br> - _Client B_: Interacts with Application C through `c.other-example.com`. |
+| **NGF Pod** | The control plane component, deployed in `nginx-gateway`, communicates with the Kubernetes API to: <br> - Fetch Gateway API resources. <br> - Dynamically manage NGINX data plane deployments. |
+| **Gateways** | - _Gateway A_: Listens for requests under `*.example.com`. Routes: <br> • _HTTPRoute A_: Routes `a.example.com` to Application A. <br> • _HTTPRoute B_: Routes `b.example.com` to Application B. <br> - _Gateway B_: Listens for requests under `*.other-example.com`. Routes: <br> • _HTTPRoute C_: Routes `c.other-example.com` to Application C. |
+| **Applications** | - _Application A_: Deployed by Developer A (2 pods) and routed to by Gateway A. <br> - _Application B_: Deployed by Developer A (1 pod) and routed to by Gateway A. <br> - _Application C_: Deployed by Developer B (1 pod) and routed to by Gateway B. |
+| **NGINX Pods** | - _NGINX Pod A_: Handles traffic from Gateway A: <br> • _NGINX Process A_: Routes to Application A and Application B. <br> • _NGINX Agent A_: Updates configuration via gRPC. <br> - _NGINX Pod B_: Handles traffic from Gateway B: <br> • _NGINX Process B_: Routes to Application C. <br> • _NGINX Agent B_: Updates configuration via gRPC. |
+| **Traffic Flow** | - _Client A_: <br> 1. Sends requests to `a.example.com` via the Public Endpoint. <br> 2. Routed to Application A by NGINX Process A. <br> - _Client B_: <br> 1. Sends requests to `c.other-example.com` via the Public Endpoint. <br> 2. Routed to Application C by NGINX Process B. |
+| **Public Endpoint** | A shared entry point (TCP Load Balancer or NodePort) that exposes services externally and forwards client traffic into the cluster. |
+| **Kubernetes API** | Acts as the central hub for resource management: <br> - Provides the latest Gateway API resources to the NGF Pod, which updates NGINX configuration dynamically. <br> - Receives resource status updates from the NGF Pod. |
-{{< call-out "note" >}} The figure does not show many of the necessary Kubernetes resources the Cluster Operators and Application Developers need to create, like deployment and services. {{< /call-out >}}
+{{< /bootstrap-table >}}
-The figure shows:
-- A _Kubernetes cluster_.
-- Users _Cluster Operator_, _Application Developer A_, _B_ and _C_. These users interact with the cluster through the Kubernetes API by creating Kubernetes objects.
-- _Clients A_, _B_, and _C_ connect to _Applications A_, _B_, and _C_ respectively, which the developers have deployed.
-- The _NGF Pod_, [deployed by _Cluster Operator_]({{< ref "/ngf/install/">}}) in the namespace _nginx-gateway_. For scalability and availability, you can have multiple replicas. The _NGF_ container interacts with the Kubernetes API to retrieve the most up-to-date Gateway API resources created within the cluster. When a new Gateway resource is provisioned, the control plane dynamically creates and manages a corresponding NGINX data plane Deployment and Service. It watches the Kubernetes API and dynamically configures these _NGINX_ deployments based on the Gateway API resources, ensuring proper alignment between the cluster state and the NGINX configuration.
-- The _NGINX Pod_ consists of an NGINX container and the integrated NGINX agent, which securely communicates with the control plane over gRPC. The control plane translates Gateway API resources into NGINX configuration, and sends the configuration to the agent.
-- Gateways _Gateway AB_ and _Gateway C_, created by _Cluster Operator_, request points where traffic can be translated to Services within the cluster. _Gateway AB_, includes a listener with a hostname `*.example.com`. _Gateway C_, includes a listener with a hostname `*.other-example.com`. Application Developers have the ability to attach their application's routes to the _Gateway AB_ if their application's hostname matches `*.example.com`, or to _Gateway C_ if their application's hostname matches `*.other-example.com`
-- _Application A_ with two pods deployed in the _applications_ namespace by _Application Developer A_. To expose the application to its clients (_Clients A_) via the host `a.example.com`, _Application Developer A_ creates _HTTPRoute A_ and attaches it to `Gateway AB`.
-- _Application B_ with one pod deployed in the _applications_ namespace by _Application Developer B_. To expose the application to its clients (_Clients B_) via the host `b.example.com`, _Application Developer B_ creates _HTTPRoute B_ and attaches it to `Gateway AB`.
-- _Application C_ with one pod deployed in the _applications2_ namespace by _Application Developer C_. To expose the application to its clients (_Clients C_) via the host `c.other-example.com`, _Application Developer C_ creates _HTTPRoute C_ and attaches it to `Gateway C`.
-- _Public Endpoint AB_, and _Public Endpoint C_ and which fronts the _NGINX AB_, and _NGINX C_ pods respectively. A public endpoint is typically a TCP load balancer (cloud, software, or hardware) or a combination of such load balancer with a NodePort service. _Clients A_ and _B_ connect to their applications via the _Public Endpoint AB_, and _Clients C_ connect to their applications via the _Public Endpoint C_.
-- The bold arrows represent connections related to the client traffic. Note that the traffic from _Clients C_ to _Application C_ is completely isolated from the traffic between _Clients A_ and _B_ and _Application A_ and _B_ respectively.
-
-The resources within the cluster are color-coded based on the user responsible for their creation.
-For example, the Cluster Operator is denoted by the color green, indicating they create and manage all the green resources.
+_Color Coding_:
+ - Cluster Operator resources (e.g., the NGF Pod, NGINX Pods, and Gateways) are marked in _green_.
+ - Resources owned by _Application Developer A_ (e.g., HTTPRoute A, Application A) are marked in _orange_.
+ - Resources owned by _Application Developer B_ (e.g., HTTPRoute C, Application C) are marked in _blue_.
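+As a sketch of how a developer exposes an application in this figure, _HTTPRoute A_ could look roughly like the following. The route name, Service name, and port are illustrative:
+
+```yaml
+apiVersion: gateway.networking.k8s.io/v1
+kind: HTTPRoute
+metadata:
+  name: httproute-a
+  namespace: applications
+spec:
+  parentRefs:
+  - name: gateway-a          # attaches the route to Gateway A
+  hostnames:
+  - "a.example.com"          # must match Gateway A's *.example.com listener
+  rules:
+  - backendRefs:
+    - name: application-a    # Service fronting Application A (illustrative)
+      port: 80
+```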
---
@@ -260,125 +181,95 @@ For example, the Cluster Operator is denoted by the color green, indicating they
graph LR
%% Main Components
KubernetesAPI[Kubernetes API]
- PrometheusMonitor[Prometheus]
F5Telemetry[F5 Telemetry Service]
+ PrometheusMonitor[Prometheus]
NGFPod[NGF Pod]
- NGINXPod[NGINX Pod]
+ NGINXAgent[NGINX Agent]
+ NGINXMaster[NGINX Master]
+ NGINXWorker[NGINX Worker]
+ ConfigFiles[Config Files]
Client[Client]
- Backend[Backend]
-
- %% NGINX Pod Grouping
- subgraph NGINXPod[NGINX Pod]
- NGINXAgent[NGINX Agent]
- NGINXMaster[NGINX Master]
- NGINXWorker[NGINX Worker]
- ConfigFiles[Config Files]
- ContainerRuntimeNGINX[stdout/stderr]
- end
-
- subgraph NGFPod[NGF Pod]
- NGFProcess[NGF Process]
- ContainerRuntimeNGF[stdout/stderr]
- end
-
- %% External Components Grouping
- subgraph ExternalComponents[.]
- KubernetesAPI[Kubernetes API]
- PrometheusMonitor[Prometheus]
- F5Telemetry[F5 Telemetry Service]
- end
-
- %% HTTPS: Communication with Kubernetes API
- NGFProcess -- "(1) Reads Updates" --> KubernetesAPI
- NGFProcess -- "(1) Writes Statuses" --> KubernetesAPI
-
- %% Prometheus: Metrics Collection
- PrometheusMonitor -- "(2) Fetches controller-runtime metrics" --> NGFPod
- PrometheusMonitor -- "(5) Fetches NGINX metrics" --> NGINXWorker
+ BackendApplication[Backend Application]
- %% Telemetry: Product telemetry data
- NGFProcess -- "(3) Sends telemetry data" --> F5Telemetry
+ %% High-Level Configuration Flow
+ KubernetesAPI -->|"(1) Updates Resources"| NGFPod
+ NGFPod -->|"(2) Sends Configuration Metadata via gRPC"| NGINXAgent
+ NGINXAgent -->|"(3) Validates & Writes Configuration"| ConfigFiles
+ NGINXAgent -->|"(4) Signals NGINX Master to Reload"| NGINXMaster
- %% File I/O: Logging
- NGFProcess -- "(4) Write logs" --> ContainerRuntimeNGF
- NGINXMaster -- "(11) Write logs" --> ContainerRuntimeNGINX
- NGINXWorker -- "(12) Write logs" --> ContainerRuntimeNGINX
+ %% Prometheus Monitoring
+ PrometheusMonitor -->|"(5) Fetches Metrics from NGINX"| NGINXWorker
- %% gRPC: Configuration Updates
- NGFProcess -- "(6) Sends Config to Agent" --> NGINXAgent
- NGINXAgent -- "(7) Validates & Writes Config & TLS Certs" --> ConfigFiles
- NGINXAgent -- "(8) Reloads NGINX" --> NGINXMaster
- NGINXAgent -- "(9) Sends DataPlaneResponse" --> NGFProcess
+ %% Telemetry Data
+ NGFPod -->|"(6) Sends Telemetry Data"| F5Telemetry
- %% File I/O: Configuration and Secrets
- NGINXMaster -- "(10) Reads TLS Secrets" --> ConfigFiles
- NGINXMaster -- "(11) Reads nginx.conf & NJS Modules" --> ConfigFiles
-
- %% Signals: Worker Lifecycle Management
- NGINXMaster -- "(14) Manages Workers (Update/Shutdown)" --> NGINXWorker
-
- %% Traffic Flow
- Client -- "(15) Sends Traffic" --> NGINXWorker
- NGINXWorker -- "(16) Routes Traffic" --> Backend
+ %% Client Traffic Flow
+ Client -->|"(7) Sends Traffic"| NGINXWorker
+ NGINXWorker -->|"(8) Routes Traffic"| BackendApplication
%% Styling
classDef important fill:#66CDAA,stroke:#333,stroke-width:2px;
classDef metrics fill:#FFC0CB,stroke:#333,stroke-width:2px;
classDef io fill:#FFD700,stroke:#333,stroke-width:2px;
- classDef signal fill:#87CEEB,stroke:#333,stroke-width:2px;
- style ExternalComponents fill:transparent,stroke-width:0px
-
- %% Class Assignments for Node Colors
- class NGFPod,KubernetesAPI important;
+ class KubernetesAPI,NGFPod important;
class PrometheusMonitor,F5Telemetry metrics;
- class ConfigFiles,NGINXMaster,NGINXWorker,NGINXAgent io;
- class Client,Backend signal;
+ class NGINXAgent,NGINXMaster,NGINXWorker,ConfigFiles io;
+ class Client,BackendApplication important;
```
-The following list describes the connections, preceeded by their types in parentheses. For brevity, the suffix "process" has been omitted from the process descriptions.
-
-1. (HTTPS)
- - Read: _NGF_ reads the _Kubernetes API_ to get the latest versions of the resources in the cluster.
- - Write: _NGF_ writes to the _Kubernetes API_ to update the handled resources' statuses and emit events. If there's more than one replica of _NGF_ and [leader election](https://github.com/nginx/nginx-gateway-fabric/tree/v1.6.1/charts/nginx-gateway-fabric#configuration) is enabled, only the _NGF_ pod that is leading will write statuses to the _Kubernetes API_.
-1. (HTTP, HTTPS) _Prometheus_ fetches the `controller-runtime` metrics via an HTTP endpoint that _NGF_ exposes (`:9113/metrics` by default).
-Prometheus is **not** required by NGINX Gateway Fabric, and its endpoint can be turned off.
-1. (HTTPS) NGF sends [product telemetry data]({{< ref "/ngf/overview/product-telemetry.md" >}}) to the F5 telemetry service.
-1. (File I/O) _NGF_ writes logs to its _stdout_ and _stderr_, which are collected by the container runtime.
-1. (HTTP, HTTPS) _Prometheus_ fetches the NGINX metrics via an HTTP endpoint that _NGINX_ exposes (`:9113/metrics` by default). Prometheus is **not** required by NGINX, and its endpoint can be turned off.
-1. (gRPC) _NGF_ generates NGINX _configuration_ based on the cluster resources and sends them to _NGINX Agent_ over a secure gRPC connection.
- - NGF sends a message containing file metadata to all pods (subscriptions) for the deployment.
- - Agent receives a ConfigApplyRequest with the list of file metadata.
- - Agent calls GetFile for each file in the list, which NGF sends back to the agent.
-1. (File I/O)
- - Write: __NGINX Agent_ validates the received configuration, and then writes and applies the config if valid. It also writes _TLS certificates_ and _keys_ from [TLS secrets](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) referenced in the accepted Gateway resource.
-1. (Signal) To reload NGINX, Agent sends the reload signal to the NGINX master.
-1. (gRPC) Agent responds to NGF with a DataPlaneResponse.
-1. (File I/O)
- - Read: The _NGINX master_ reads _configuration files_ and the _TLS cert and keys_ referenced in the configuration when it starts or during a reload.
-1. (File I/O)
- - Read: The _NGINX master_ reads the `nginx.conf` file from the `/etc/nginx` directory. This [file](https://github.com/nginx/nginx-gateway-fabric/blob/v1.6.1/internal/mode/static/nginx/conf/nginx.conf) contains the global and http configuration settings for NGINX. In addition, _NGINX master_ reads the NJS modules referenced in the configuration when it starts or during a reload. NJS modules are stored in the `/usr/lib/nginx/modules` directory.
-1. (File I/O) The _NGINX master_ sends logs to its _stdout_ and _stderr_, which are collected by the container runtime.
-1. (File I/O) An _NGINX worker_ writes logs to its _stdout_ and _stderr_, which are collected by the container runtime.
-1. (Signal) The _NGINX master_ controls the [lifecycle of _NGINX workers_](https://nginx.org/en/docs/control.html#reconfiguration) it creates workers with the new configuration and shutdowns workers with the old configuration.
-1. (HTTP, HTTPS) A _client_ sends traffic to and receives traffic from any of the _NGINX workers_ on ports 80 and 443.
-1. (HTTP, HTTPS) An _NGINX worker_ sends traffic to and receives traffic from the _backends_.
+The following table describes the connections shown in the diagram, with each connection's protocol or mechanism noted in the Component/Protocol column. For brevity, the suffix "process" has been omitted from the process descriptions.
+
+{{< bootstrap-table "table table-bordered table-striped table-responsive" >}}
+| # | Component/Protocol | Description |
+| ---| ----------------------- | ------------------------------------------------------------------------------------------------------------ |
+| 1 | Kubernetes API (HTTPS) | _Kubernetes API → NGF Pod_: The NGF Pod watches the Kubernetes API for Gateway API resources (e.g., Gateways, HTTPRoutes) to fetch the latest cluster configuration. |
+| 2 | gRPC | _NGF Pod → NGINX Agent_: The NGF Pod processes the Gateway API resources, generates NGINX configuration metadata, and sends it securely to the NGINX Agent over gRPC. |
+| 3 | File I/O | _NGINX Agent → Config Files_: The NGINX Agent validates the configuration and writes the configuration files. |
+| 4 | Signal | _NGINX Agent → NGINX Master_: The NGINX Agent signals the NGINX Master process to reload the updated configuration. This ensures the updated routes and settings are live. |
+| 5 | HTTP/HTTPS | _Prometheus → NGINX Worker_: Prometheus collects runtime metrics (e.g., traffic stats, request details) directly from the NGINX Worker via the HTTP endpoint (`:9113/metrics`). This data helps monitor the operational state of the data plane. |
+| 6 | HTTPS | _NGF Pod → F5 Telemetry Service_: The NGF Pod sends system and usage telemetry data (like API requests handled, error rates, and performance metrics) to the F5 Telemetry Service for analytics. |
+| 7 | HTTP, HTTPS | _Client → NGINX Worker_: Clients send traffic (e.g., HTTP/HTTPS requests) to the NGINX Worker. These are typically routed via a public LoadBalancer or NodePort to expose NGINX. |
+{{< /bootstrap-table >}}
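+As an illustration of connection 5, a Prometheus scrape configuration for the NGINX metrics endpoint might be sketched as follows. Prometheus is not required by NGINX Gateway Fabric, and the job name and selection logic here are hypothetical:
+
+```yaml
+scrape_configs:
+- job_name: ngf-nginx-dataplane        # hypothetical job name
+  kubernetes_sd_configs:
+  - role: pod                          # discover pods in the cluster
+  relabel_configs:
+  # Keep only containers exposing the metrics port (9113 by default)
+  - source_labels: [__meta_kubernetes_pod_container_port_number]
+    regex: "9113"
+    action: keep
+```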
---
-### Differences with NGINX Plus
+### NGINX Plus Features and Benefits
+
+NGINX Gateway Fabric supports both NGINX Open Source and NGINX Plus. While the previous diagram shows NGINX Open Source, using NGINX Plus provides additional capabilities, including:
-The previous diagram depicts NGINX Gateway Fabric using NGINX Open Source. NGINX Gateway Fabric with NGINX Plus has the following difference:
+- The ability for administrators to connect to the NGINX Plus API on port 8765 (restricted to localhost by default).
+- Dynamic updates to upstream servers without requiring a full reload:
+ - Changes to upstream servers, such as application scaling (e.g., adding or removing pods in Kubernetes), can be applied using the [NGINX Plus API](http://nginx.org/en/docs/http/ngx_http_api_module.html).
+ - This reduces the frequency of configuration reloads, minimizing potential disruptions and improving system stability during updates.
-- An _admin_ can connect to the NGINX Plus API using port 8765. NGINX only allows connections from localhost.
+These features contribute to reduced downtime, improved performance during scaling events, and more fine-grained control over traffic management.
---
-## Updating upstream servers
+### Resilience and Fault Isolation
+
+This architecture separates the control plane and data plane, creating clear operational boundaries that improve resilience and fault isolation. It also enhances scalability, security, and reliability while reducing the risk of failures affecting both components.
-The normal process to update any changes to NGINX is to write the configuration files and reload NGINX. However, when using NGINX Plus, we can take advantage of the [NGINX Plus API](http://nginx.org/en/docs/http/ngx_http_api_module.html) to limit the amount of reloads triggered when making changes to NGINX. Specifically, when the endpoints of an application in Kubernetes change (Such as scaling up or down), the NGINX Plus API is used to update the upstream servers in NGINX with the new endpoints without a reload. This reduces the potential for a disruption that could occur when reloading.
+#### Control Plane Resilience
+
+In the event of a control plane failure or downtime:
+- Existing data plane pods continue serving traffic using their last-valid cached configurations.
+- Updates to routes or Gateways are temporarily paused, but stable traffic delivery continues without degradation.
+- Recovery restores functionality, resynchronizing configuration updates seamlessly.
+
+#### Data Plane Resilience
+
+If a data plane pod encounters an outage or restarts:
+- Only routes tied to the specific linked Gateway object experience brief disruptions.
+- Configurations automatically resynchronize with the data plane upon pod restart, minimizing the scope of impact.
+- Other data plane pods remain unaffected and continue serving traffic normally.
---
## Pod readiness
-The `nginx-gateway` container exposes a readiness endpoint at `/readyz`. During startup, a [readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes) periodically checks this endpoint. The probe returns a `200 OK` response once the control plane initializes successfully and is ready to begin configuring NGINX. At that point, the pod is marked as ready.
+The control plane (`nginx-gateway`) and data plane (`nginx`) containers provide a readiness endpoint at `/readyz`. A [readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes) periodically checks this endpoint during startup. The probe reports `200 OK` when:
+- The control plane is ready to configure the NGINX data planes.
+- The data plane is ready to handle traffic.
+
+This marks the pods as ready, so traffic is routed only to healthy pods, ensuring reliable startup and smooth operation.
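+
+The readiness wiring described above corresponds to a standard Kubernetes probe. The following is a minimal sketch; the port number and timing values are illustrative assumptions, not the shipped manifest:
+
+```yaml
+# Hypothetical readiness probe for the control plane container.
+containers:
+  - name: nginx-gateway
+    readinessProbe:
+      httpGet:
+        path: /readyz   # endpoint exposed by the control plane
+        port: 8081      # assumed probe port for illustration
+      initialDelaySeconds: 3
+      periodSeconds: 3
+```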
From 4c79c37a9a438c03b0cb18545ad1f1b18e2a0a92 Mon Sep 17 00:00:00 2001
From: Saloni Choudhary <146118978+salonichf5@users.noreply.github.com>
Date: Thu, 14 Aug 2025 22:36:07 +0530
Subject: [PATCH 2/3] Apply suggestions from code review
Co-authored-by: Alan Dooley
Co-authored-by: Saylor Berman
---
content/ngf/overview/gateway-architecture.md | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/content/ngf/overview/gateway-architecture.md b/content/ngf/overview/gateway-architecture.md
index 0ecc42782..5ad71aee0 100644
--- a/content/ngf/overview/gateway-architecture.md
+++ b/content/ngf/overview/gateway-architecture.md
@@ -7,7 +7,7 @@ nd-product: NGF
nd-docs: DOCS-1413
---
-Learn about the architecture and design principles of NGINX Gateway Fabric -- a Kubernetes Gateway API implementation using NGINX as the data plane.
+Learn about the architecture and design principles of NGINX Gateway Fabric: a Kubernetes Gateway API implementation that uses NGINX as the data plane.
This document is intended for:
@@ -51,7 +51,7 @@ Users can have multiple gateways running side-by-side in the same cluster. This
---
-## High-Level Overview of NGINX Gateway Fabric in Action
+## High-level overview of NGINX Gateway Fabric in operation
This figure depicts an example of NGINX Gateway Fabric exposing three web applications within a Kubernetes cluster to clients on the internet:
@@ -162,7 +162,7 @@ The figure shows:
| **Applications** | - _Application A_: Deployed by Developer A (2 pods) and routed to by Gateway A.
- _Application B_: Deployed by Developer A (1 pod) and routed to by Gateway A.
- _Application C_: Deployed by Developer B (1 pod) and routed to by Gateway B. |
| **NGINX Pods** | - _NGINX Pod A_: Handles traffic from Gateway A:
• _NGINX Process A_: Routes to Application A and Application B.
• _NGINX Agent A_: Updates configuration via gRPC.
- _NGINX Pod B_: Handles traffic from Gateway B:
• _NGINX Process B_: Routes to Application C.
• _NGINX Agent B_: Updates configuration via gRPC. |
| **Traffic Flow** | - _Client A_:
1. Sends requests to `a.example.com` via Public Endpoint.
2. Routed to Application A by NGINX Process A.
- _Client B_:
1. Sends requests to `c.other-example.com` via Public Endpoint.
2. Routed to Application C by NGINX Process B. |
-| **Public Endpoint** | A shared entry point (TCP Load Balancer or NodePort) that exposes services externally and forwards client traffic into the cluster. |
+| **Public Endpoint** | A shared entry point (TCP Load Balancer or NodePort) that exposes the NGINX Service externally and forwards client traffic into the cluster. |
| **Kubernetes API** | Acts as the central hub for resource management:
- Fetches Gateway API resources.
- Updates NGINX configuration dynamically via the NGF Pod. |
{{% /bootstrap-table %}}
@@ -242,22 +242,22 @@ NGINX Gateway Fabric supports both NGINX Open Source and NGINX Plus. While the p
- Changes to upstream servers, such as application scaling (e.g., adding or removing pods in Kubernetes), can be applied using the [NGINX Plus API](http://nginx.org/en/docs/http/ngx_http_api_module.html).
- This reduces the frequency of configuration reloads, minimizing potential disruptions and improving system stability during updates.
-These features contribute to reduced downtime, improved performance during scaling events, and more fine-grained control over traffic management.
+These features enable reduced downtime, improved performance during scaling events, and more fine-grained control over traffic management.
---
-### Resilience and Fault Isolation
+### Resilience and fault isolation
This architecture separates the control plane and data plane, creating clear operational boundaries that improve resilience and fault isolation. It also enhances scalability, security, and reliability while reducing the risk of failures affecting both components.
-#### Control Plane Resilience
+#### Control plane resilience
In the event of a control plane failure or downtime:
- Existing data plane pods continue serving traffic using their last-valid cached configurations.
- Updates to routes or Gateways are temporarily paused, but stable traffic delivery continues without degradation.
- Recovery restores functionality, resynchronizing configuration updates seamlessly.
-#### Data Plane Resilience
+#### Data plane resilience
If a data plane pod encounters an outage or restarts:
- Only routes tied to the specific linked Gateway object experience brief disruptions.
From 8d84dfc6972cd166c37c7be39d7221c05d622536 Mon Sep 17 00:00:00 2001
From: salonichf5 <146118978+salonichf5@users.noreply.github.com>
Date: Thu, 14 Aug 2025 13:48:33 -0600
Subject: [PATCH 3/3] update diagrams based on reviews
---
content/ngf/overview/gateway-architecture.md | 165 ++++++++++---------
1 file changed, 88 insertions(+), 77 deletions(-)
diff --git a/content/ngf/overview/gateway-architecture.md b/content/ngf/overview/gateway-architecture.md
index 5ad71aee0..78123ea44 100644
--- a/content/ngf/overview/gateway-architecture.md
+++ b/content/ngf/overview/gateway-architecture.md
@@ -58,76 +58,78 @@ This figure depicts an example of NGINX Gateway Fabric exposing three web applic
```mermaid
graph LR
%% Nodes and Relationships
- subgraph KubernetesCluster[Kubernetes Cluster]
- subgraph ApplicationsNamespaceA[Namespace: applications]
- subgraph DataplaneComponentsA[Dataplane Components]
- GatewayA[Gateway A
Listener: *.example.com]
- subgraph NGINXPodA[NGINX Pod]
- subgraph NGINXContainerA[NGINX Container]
- NGINXProcessA(NGINX)
- NGINXAgentA(NGINX Agent)
+ subgraph KubernetesCluster["Kubernetes Cluster"]
+ subgraph NamespaceNGF["Namespace: nginx-gateway"]
+ NGFControlPlanePod["NGINX Gateway Fabric Control Plane Pod"]
+ NGFControlPlanePod --> KubernetesAPI["Kubernetes API"]
+ end
+ subgraph ApplicationsNamespaceA["Namespace: applications"]
+ subgraph DataplaneComponentsA["Dataplane Components"]
+ GatewayA["Gateway A
Listener: *.example.com"]
+ subgraph NGINXDataPlanePodA["NGINX Data Plane Pod"]
+ subgraph NGINXContainerA["NGINX Container"]
+ NGINXProcessA["NGINX Process"]
+ NGINXAgentA["NGINX Agent"]
end
end
end
- subgraph HTTPRouteAAndApplications[HTTPRoutes and Applications]
- HTTPRouteA[HTTPRoute A\nHost: a.example.com]
- HTTPRouteB[HTTPRoute B\nHost: b.example.com]
- ApplicationA[Application A
Pods: 2]
- ApplicationB[Application B
Pods: 1]
+ subgraph HTTPRouteAAndApplications["HTTPRoutes and Applications"]
+ HTTPRouteA["HTTPRoute A
Host: a.example.com"]
+ HTTPRouteB["HTTPRoute B
Host: b.example.com"]
+ ApplicationA["Application A
Pods: 2"]
+ ApplicationB["Application B
Pods: 1"]
end
end
- subgraph ApplicationsNamespaceB[Namespace: applications-2]
- subgraph DataplaneComponentsB[Dataplane Components]
- GatewayB[Gateway B
Listener: *.other-example.com]
- subgraph NGINXPodB[NGINX Pod]
- subgraph NGINXContainerB[NGINX Container]
- NGINXProcessB(NGINX)
- NGINXAgentB(NGINX Agent)
+ subgraph ApplicationsNamespaceB["Namespace: applications-2"]
+ subgraph DataplaneComponentsB["Dataplane Components"]
+ GatewayB["Gateway B
Listener: *.other-example.com"]
+ subgraph NGINXDataPlanePodB["NGINX Data Plane Pod"]
+ subgraph NGINXContainerB["NGINX Container"]
+ NGINXProcessB["NGINX Process"]
+ NGINXAgentB["NGINX Agent"]
end
end
end
- subgraph HTTPRouteBandApplications[HTTPRoutes and Applications]
- HTTPRouteC[HTTPRoute C\nHost: c.other-example.com]
- ApplicationC[Application C
Pods: 1]
+ subgraph HTTPRouteBandApplications["HTTPRoutes and Applications"]
+ HTTPRouteC["HTTPRoute C
Host: c.other-example.com"]
+ ApplicationC["Application C
Pods: 1"]
end
end
- KubernetesAPI[Kubernetes API]
end
- subgraph UsersAndClients[Users and Clients]
- UserOperator[Cluster Operator]
- UserDevA[Application Developer A]
- UserDevB[Application Developer B]
- ClientA[Client A]
- ClientB[Client B]
+ subgraph UsersAndClients["Users and Clients"]
+ UserOperator["Cluster Operator"]
+ UserDevA["Application Developer A"]
+ UserDevB["Application Developer B"]
+ ClientA["Client A"]
+ ClientB["Client B"]
end
- subgraph SharedInfrastructure[Public Endpoint]
- PublicEndpoint[TCP Load Balancer / NodePort]
+ subgraph SharedInfrastructure["Public Endpoint"]
+ PublicEndpoint["TCP Load Balancer / NodePort"]
end
%% Updated Traffic Flow
- ClientA == a.example.com ==> PublicEndpoint
- ClientB == c.other-example.com ==> PublicEndpoint
- PublicEndpoint ==> NGINXProcessA
- PublicEndpoint ==> NGINXProcessB
- NGINXProcessA ==> ApplicationA
- NGINXProcessA ==> ApplicationB
- NGINXProcessB ==> ApplicationC
+ ClientA-->|a.example.com|PublicEndpoint
+ ClientB-->|c.other-example.com|PublicEndpoint
+ PublicEndpoint==>NGINXProcessA
+ PublicEndpoint==>NGINXProcessB
+ NGINXProcessA==>ApplicationA
+ NGINXProcessA==>ApplicationB
+ NGINXProcessB==>ApplicationC
%% Kubernetes Configuration Flow
- HTTPRouteA --> GatewayA
- HTTPRouteB --> GatewayA
- HTTPRouteC --> GatewayB
- UserOperator --> KubernetesAPI
- NGFPod[NGF Pod] --> KubernetesAPI
- NGFPod --gRPC--> NGINXAgentA
- NGFPod --gRPC--> NGINXAgentB
- NGINXAgentA --> NGINXProcessA
- NGINXAgentB --> NGINXProcessB
- UserDevA --> KubernetesAPI
- UserDevB --> KubernetesAPI
+ HTTPRouteA-->GatewayA
+ HTTPRouteB-->GatewayA
+ HTTPRouteC-->GatewayB
+ UserOperator-->KubernetesAPI
+ NGFControlPlanePod--gRPC-->NGINXAgentA
+ NGFControlPlanePod--gRPC-->NGINXAgentB
+ NGINXAgentA-->NGINXProcessA
+ NGINXAgentB-->NGINXProcessB
+ UserDevA-->KubernetesAPI
+ UserDevB-->KubernetesAPI
%% Styling
style UserOperator fill:#66CDAA,stroke:#333,stroke-width:2px
style GatewayA fill:#66CDAA,stroke:#333,stroke-width:2px
style GatewayB fill:#66CDAA,stroke:#333,stroke-width:2px
- style NGFPod fill:#66CDAA,stroke:#333,stroke-width:2px
+ style NGFControlPlanePod fill:#66CDAA,stroke:#333,stroke-width:2px
style NGINXProcessA fill:#66CDAA,stroke:#333,stroke-width:2px
style NGINXProcessB fill:#66CDAA,stroke:#333,stroke-width:2px
style KubernetesAPI fill:#9370DB,stroke:#333,stroke-width:2px
@@ -154,18 +156,18 @@ The figure shows:
| **Category** | **Description** |
|-------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
-| **Namespaces** | - _Namespace: applications_: Contains Gateway A for `*.example.com`, handling Application A and Application B.
- _Namespace: applications-2_: Contains Gateway B for `*.other-example.com`, handling Application C. |
-| **Users** | - _Cluster Operator_: Sets up NGF and manages Gateway API resources by provisioning Gateways (A and B) and the NGF Pod.
- _Developers A & B_: Developers deploy their applications and create HTTPRoutes. |
+| **Namespaces** | - _Namespace: nginx-gateway_: Contains the NGINX Gateway Fabric Control Plane Pod, responsible for managing Gateway API configurations and provisioning NGINX Data Plane Pods.
- _Namespace: applications_: Contains Gateway A for `*.example.com`, handling Application A and Application B.
- _Namespace: applications-2_: Contains Gateway B for `*.other-example.com`, handling Application C. |
+| **Users** | - _Cluster Operator_: Sets up the NGINX Gateway Fabric Control Plane Pod and manages Gateway API resources by provisioning Gateways (A and B).
- _Developers A & B_: Developers deploy their applications and create HTTPRoutes associated with their Gateways. |
| **Clients** | - _Client A_: Interacts with Application A through `a.example.com`.
- _Client B_: Interacts with Application C through `c.other-example.com`. |
-| **NGF Pod** | The control plane component, deployed in `nginx-gateway`, communicates with the Kubernetes API to:
- Fetch Gateway API resources.
- Dynamically manage NGINX data plane deployments. |
-| **Gateways** | - _Gateway A_: Listens for requests under `*.example.com`. Routes:
• _HTTPRoute A_: Routes `a.example.com` to Application A.
• _HTTPRoute B_: Routes `b.example.com` to Application B.
- _Gateway B_: Listens for requests under `*.other-example.com`. Routes:
• _HTTPRoute C_: Routes `c.other-example.com` to Application C. |
-| **Applications** | - _Application A_: Deployed by Developer A (2 pods) and routed to by Gateway A.
- _Application B_: Deployed by Developer A (1 pod) and routed to by Gateway A.
- _Application C_: Deployed by Developer B (1 pod) and routed to by Gateway B. |
-| **NGINX Pods** | - _NGINX Pod A_: Handles traffic from Gateway A:
• _NGINX Process A_: Routes to Application A and Application B.
• _NGINX Agent A_: Updates configuration via gRPC.
- _NGINX Pod B_: Handles traffic from Gateway B:
• _NGINX Process B_: Routes to Application C.
• _NGINX Agent B_: Updates configuration via gRPC. |
-| **Traffic Flow** | - _Client A_:
1. Sends requests to `a.example.com` via Public Endpoint.
2. Routed to Application A by NGINX Process A.
- _Client B_:
1. Sends requests to `c.other-example.com` via Public Endpoint.
2. Routed to Application C by NGINX Process B. |
-| **Public Endpoint** | A shared entry point (TCP Load Balancer or NodePort) that exposes the NGINX Service externally and forwards client traffic into the cluster. |
-| **Kubernetes API** | Acts as the central hub for resource management:
- Fetches Gateway API resources.
- Updates NGINX configuration dynamically via the NGF Pod. |
+| **NGINX Gateway Fabric Control Plane Pod** | The control plane pod, deployed in the `nginx-gateway` namespace, communicates with the Kubernetes API to:
- Fetch Gateway API resources.
- Dynamically provision and configure NGINX Data Plane Pods.
- Deliver traffic routing and configuration updates to NGINX Agent instances over gRPC. |
+| **Gateways** | - _Gateway A_: Listens for requests under `*.example.com`. Routes:
• _HTTPRoute A_: Routes requests to `a.example.com` into Application A.
• _HTTPRoute B_: Routes requests to `b.example.com` into Application B.
- _Gateway B_: Listens for requests under `*.other-example.com`. Routes:
• _HTTPRoute C_: Routes requests to `c.other-example.com` into Application C. |
+| **Applications** | - _Application A_: Deployed by Developer A (2 pods), routed by Gateway A via HTTPRoute A.
- _Application B_: Deployed by Developer A (1 pod), routed by Gateway A via HTTPRoute B.
- _Application C_: Deployed by Developer B (1 pod), routed by Gateway B via HTTPRoute C. |
+| **NGINX Data Plane Pods** | - _NGINX Data Plane Pod A_: Handles traffic routed from Gateway A:
• _NGINX Process A_: Forwards requests to Application A and Application B as defined in Gateway A's HTTPRoutes.
• _NGINX Agent A_: Receives configuration updates from the NGINX Gateway Fabric Control Plane Pod via gRPC.
- _NGINX Data Plane Pod B_: Manages traffic routed from Gateway B:
• _NGINX Process B_: Forwards requests to Application C as defined in Gateway B’s HTTPRoute.
• _NGINX Agent B_: Receives configuration updates via gRPC from the NGINX Gateway Fabric Control Plane Pod. |
+| **Traffic Flow** | - _Client A_:
1. Sends requests to `a.example.com` via the Public Endpoint.
2. Requests are routed by Gateway A and processed by NGINX Process A.
3. Traffic is forwarded to Application A.
- _Client B_:
1. Sends requests to `c.other-example.com` via the Public Endpoint.
2. Requests are routed by Gateway B and processed by NGINX Process B.
3. Traffic is forwarded to Application C. |
+| **Public Endpoint** | A shared entry point (TCP Load Balancer or NodePort) that exposes the NGINX Data Plane externally to forward client traffic into the cluster. |
+| **Kubernetes API** | Acts as the central hub for resource management:
- Fetches Gateway API resources for Gateway A and Gateway B.
- Facilitates NGINX configuration updates via the NGINX Gateway Fabric Control Plane Pod. |
-{{% /bootstrap-table %}}
+{{< /bootstrap-table >}}
_Color Coding_ :
@@ -180,14 +182,20 @@ _Color Coding_ :
```mermaid
graph LR
%% Main Components
- KubernetesAPI[Kubernetes API]
+ subgraph nginx-gateway["Namespace: nginx-gateway"]
+ NGFPod[NGINX Gateway Fabric Control Plane Pod]
+ end
+
+ subgraph NGINXDataPlane["NGINX Data Plane Pod"]
+ NGINXAgent[NGINX Agent]
+ NGINXMaster[NGINX Master]
+ NGINXWorker[NGINX Worker]
+ ConfigFiles[Config Files]
+ end
+
F5Telemetry[F5 Telemetry Service]
PrometheusMonitor[Prometheus]
- NGFPod[NGF Pod]
- NGINXAgent[NGINX Agent]
- NGINXMaster[NGINX Master]
- NGINXWorker[NGINX Worker]
- ConfigFiles[Config Files]
+ KubernetesAPI[Kubernetes API]
Client[Client]
BackendApplication[Backend Application]
@@ -220,20 +228,23 @@ graph LR
The following table describes the connections, preceded by their types in parentheses. For brevity, the suffix "process" has been omitted from the process descriptions.
{{< bootstrap-table "table table-bordered table-striped table-responsive" >}}
+
| # | Component/Protocol | Description |
| ---| ----------------------- | ------------------------------------------------------------------------------------------------------------ |
-| 1 | Kubernetes API (HTTPS) | _Kubernetes API → NGF Pod_: The NGF Pod watches the Kubernetes API for Gateway API resources (e.g., Gateways, HTTPRoutes) to fetch the latest cluster configuration. |
-| 2 | gRPC | _NGF Pod → NGINX Agent_: The NGF Pod processes the Gateway API resources, generates NGINX configuration metadata, and sends it securely to the NGINX Agent over gRPC. |
-| 3 | File I/O | _NGINX Agent → Config Files_: The NGINX Agent validates the configuration and writes the configuration files. |
-| 4 | Signal | _NGINX Agent → NGINX Master_: The NGINX Agent signals the NGINX Master process to reload the updated configuration. This ensures the updated routes and settings are live. |
-| 5 | HTTP/HTTPS | _Prometheus → NGINX Worker_: Prometheus collects runtime metrics (e.g., traffic stats, request details) directly from the NGINX Worker via the HTTP endpoint (`:9113/metrics`). This data helps monitor the operational state of the data plane. |
-| 6 | HTTPS | _NGF Pod → F5 Telemetry Service_: The NGF Pod sends system and usage telemetry data (like API requests handled, error rates, and performance metrics) to the F5 Telemetry Service for analytics. |
-| 7 | HTTP, HTTPS | _Client → NGINX Worker_: Clients send traffic (e.g., HTTP/HTTPS requests) to the NGINX Worker. These are typically routed via a public LoadBalancer or NodePort to expose NGINX. |
-{{% /bootstrap-table %}}
+| 1 | Kubernetes API (HTTPS) | _Kubernetes API → NGINX Gateway Fabric Control Plane Pod_: The NGINX Gateway Fabric Control Plane Pod (in the `nginx-gateway` namespace) watches the Kubernetes API for updates to Gateway API resources (e.g., Gateways, HTTPRoutes), fetching the latest configuration to manage routing and traffic control. |
+| 2 | gRPC | _NGINX Gateway Fabric Control Plane Pod → NGINX Agent_: The NGINX Gateway Fabric Control Plane Pod processes Gateway API resources, generates NGINX configuration settings, and securely delivers them to the NGINX Agent inside the NGINX Data Plane Pod via gRPC. |
+| 3 | File I/O | _NGINX Agent → Config Files_: The NGINX Agent (within the NGINX Data Plane Pod) validates the configuration metadata received from the Control Plane Pod and writes it to NGINX configuration files within the pod. These files store dynamic routing rules and traffic settings. |
+| 4 | Signal | _NGINX Agent → NGINX Master_: After writing the configuration files, the NGINX Agent signals the NGINX Master process to reload the configuration. This ensures the NGINX Data Plane Pod immediately applies the updated routes and settings. |
+| 5 | HTTP/HTTPS | _Prometheus → NGINX Worker_: Prometheus collects runtime metrics (e.g., traffic statistics, request rates, and active connections) from the NGINX Worker process, which is part of the NGINX Data Plane Pod. The `/metrics` endpoint exposes these metrics for monitoring and observability. |
+| 6 | HTTPS | _NGINX Gateway Fabric Control Plane Pod → F5 Telemetry Service_: The NGINX Gateway Fabric Control Plane Pod sends telemetry data (e.g., API requests handled, usage metrics, performance stats, error rates) to the external F5 Telemetry Service for centralized monitoring and diagnostics. |
+| 7 | HTTP/HTTPS | _Client → NGINX Worker_: Clients send incoming application traffic (e.g., HTTP/HTTPS requests) to the NGINX Worker process within the NGINX Data Plane Pod. These requests are typically routed through a shared Public Endpoint (e.g., LoadBalancer or NodePort) before reaching the NGINX Data Plane. |
+| 8 | HTTP/HTTPS | _NGINX Worker → Backend Application_: The NGINX Worker process forwards client traffic to the appropriate backend application services (e.g., Pods) as defined in the routing rules and configuration received from the Control Plane Pod. |
+
+{{< /bootstrap-table >}}
---
-### NGINX Plus Features and Benefits
+### Additional features and enhancements when using NGINX Plus
NGINX Gateway Fabric supports both NGINX Open Source and NGINX Plus. While the previous diagram shows NGINX Open Source, using NGINX Plus provides additional capabilities, including:
@@ -248,7 +259,7 @@ These features enable reduced downtime, improved performance during scaling even
### Resilience and fault isolation
-This architecture separates the control plane and data plane, creating clear operational boundaries that improve resilience and fault isolation. It also enhances scalability, security, and reliability while reducing the risk of failures affecting both components.
+This architecture separates the control plane and data plane, creating clear operational boundaries that improve resilience and fault isolation:
#### Control plane resilience