Commit 025517b (1 parent: b4e581b)

📝 Add Kubernetes warning, when to use this image (#122)

README.md

Lines changed: 79 additions & 1 deletion

**Docker Hub image**: [https://hub.docker.com/r/tiangolo/uwsgi-nginx/](https://hub.docker.com/r/tiangolo/uwsgi-nginx/)

## 🚨 WARNING: You Probably Don't Need this Docker Image

You are probably using **Kubernetes** or similar tools. In that case, you probably **don't need this image** (or any other **similar base image**). You are probably better off **building a Docker image from scratch**.

---

If you have a cluster of machines with **Kubernetes**, Docker Swarm Mode, Nomad, or another similar complex system to manage distributed containers on multiple machines, then you will probably want to **handle replication** at the **cluster level**, instead of using a **process manager** in each container that starts multiple **worker processes**, which is what this Docker image does.

In those cases (e.g. when using Kubernetes) you would probably want to build a **Docker image from scratch**, installing your dependencies, and running **a single process** per container, instead of using this image.

For example, using [Gunicorn](https://gunicorn.org/) you could have a file `app/gunicorn_conf.py` with:

```Python
# Gunicorn config variables
loglevel = "info"
errorlog = "-"  # log errors to stderr
accesslog = "-"  # log access to stdout
worker_tmp_dir = "/dev/shm"  # keep the worker heartbeat file in shared memory
graceful_timeout = 120
timeout = 120
keepalive = 5
threads = 3
```

And then you could have a `Dockerfile` with:

```Dockerfile
FROM python:3.9

WORKDIR /code

# Copy just the requirements first, so this layer stays cached in Docker's
# layer cache until the dependencies change
COPY ./requirements.txt /code/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

COPY ./app /code/app

CMD ["gunicorn", "--config", "app/gunicorn_conf.py", "--bind", "0.0.0.0:80", "app.main:app"]
```
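The `app.main:app` target in that `CMD` is the application's import string: the module `app/main.py` and the variable `app` inside it. As a minimal hypothetical sketch (not something provided by this image, and assuming a WSGI framework like Flask listed in `requirements.txt`), `app/main.py` could look like:

```Python
# app/main.py -- hypothetical minimal WSGI app for the Gunicorn command above
from flask import Flask

# The "app" variable is what "app.main:app" in the CMD points to
app = Flask(__name__)


@app.route("/")
def index():
    # With the config above, a single Gunicorn worker process with 3 threads
    # serves requests; replication happens at the cluster level, not in the
    # container.
    return "Hello from a single-process container"
```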
You can read more about these ideas in the [FastAPI documentation about: FastAPI in Containers - Docker](https://fastapi.tiangolo.com/deployment/docker/#replication-number-of-processes), as the same ideas would apply to other web applications in containers.

## When to Use this Docker Image

### A Simple App

You could want a process manager running multiple worker processes in the container if your application is **simple enough** that you don't need (at least not yet) to fine-tune the number of processes too much, you can just use an automated default, and you are running it on a **single server**, not a cluster.

### Docker Compose

You could be deploying to a **single server** (not a cluster) with **Docker Compose**, so you wouldn't have an easy way to manage replication of containers (with Docker Compose) while preserving the shared network and **load balancing**.

Then you could want to have **a single container** with a **process manager** starting **several worker processes** inside, as this Docker image does.

### Prometheus and Other Reasons

You could also have **other reasons** that would make it easier to have a **single container** with **multiple processes** instead of having **multiple containers** with **a single process** in each of them.

For example (depending on your setup) you could have some tool like a Prometheus exporter in the same container that should have access to **each of the requests** that come in.

In this case, if you had **multiple containers**, by default, when Prometheus came to **read the metrics**, it would get the ones for **a single container each time** (for the container that handled that particular request), instead of getting the **accumulated metrics** for all the replicated containers.

Then, in that case, it could be simpler to have **one container** with **multiple processes**, and a local tool (e.g. a Prometheus exporter) on the same container collecting Prometheus metrics for all the internal processes and exposing those metrics on that single container.
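As a rough sketch of that idea (an assumption for illustration, not something this image ships): with the [`prometheus_client`](https://github.com/prometheus/client_python) Python library in multiprocess mode, a small local exporter could merge the metrics of all the worker processes and expose them on one endpoint, so Prometheus scrapes the accumulated metrics for the whole container:

```Python
# Hypothetical local exporter: merges Prometheus metrics from all worker
# processes in this container. Assumes every worker writes its metrics under
# the directory named by the PROMETHEUS_MULTIPROC_DIR environment variable,
# as prometheus_client's multiprocess mode requires.
import time

from prometheus_client import CollectorRegistry, multiprocess, start_http_server

registry = CollectorRegistry()
# Collect and accumulate the metric files written by every worker process
multiprocess.MultiProcessCollector(registry)

# Expose the accumulated metrics on a single port for Prometheus to scrape
start_http_server(9090, registry=registry)

while True:
    time.sleep(60)  # the HTTP server runs in a daemon thread; keep the process alive
```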
---

Read more about it all in the [FastAPI documentation about: FastAPI in Containers - Docker](https://fastapi.tiangolo.com/deployment/docker/), as the same concepts apply to other web applications in containers.

## How to use

You don't have to clone this repo.

You can use this image as a base image for other images.

Assuming you have a file `requirements.txt`, you could have a `Dockerfile` like this:

```Dockerfile
FROM tiangolo/uwsgi-nginx:python3.9

COPY ./requirements.txt /app/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt

COPY ./app /app

# Your Dockerfile code...
```