KomITi Academy

Infrastructure

Terraform, Docker & AWS — delivery mental model for the entire KomITi system

This is a foundation document for candidates and every new KomITi engineer. It teaches the infrastructure layer from scratch — not through abstract toy examples, but through the real KomITi AWS/Terraform/Docker context.

The purpose is not to turn you into a cloud/platform specialist in 4 hours, but to give you an operational mental model:

By the end of this document you will understand the full infra stack, and in the hands-on lab (section 6) you will bring up the Odoo runtime that serves as the foundation for all future tutorials.

Table of Contents

  1. What is the infra stack in KomITi
  2. AWS fundamentals you need to know
  3. Docker and container fundamentals you need to know
    1. Docker file map in odoo4komiti
    2. docker-compose.yml
    3. Dockerfile.odoo
    4. .env and config/*.conf
  4. Terraform
    1. Terraform mental model
    2. What are provider, resource, data source, and output
    3. Directory structure: general and KomITi-specific
    4. Terraform files in [modules/]
    5. Terraform files in [root stack/]
    6. Dependency reasoning
    7. Summarizing what Terraform code typically means in this repo
    8. How AWS, Docker, and Terraform connect into a single flow
    9. How to turn Terraform files into action and materialize artifacts
    10. Minimal safe workflow in KomITi
  5. Terraform vs Docker Compose: same information, different place of record
  6. Lab: bring up the Odoo runtime for development
    1. Option A — with AWS/Azure lab budget (Terraform)
    2. Option B — without AWS/Azure lab budget (Docker Compose)
  7. What to read next
  8. Task on the komiti_academy project for candidates

1) What is the infra stack in KomITi

When we say “infra” in this repo, we don’t mean a single tool, but multiple layers working together: AWS provides the hosts, network, and security boundary; Terraform describes and provisions those resources as code; and Docker/Compose runs the application services on top of them.

The professional thinking here is: know which layer a given problem belongs to before you try to fix it.

2) AWS fundamentals you need to know

AWS (Amazon Web Services) is a cloud platform — a collection of on-demand computing services (servers, storage, networking, databases, and more) hosted in Amazon's data centres worldwide. Instead of buying and maintaining physical hardware, you rent exactly the resources you need and pay only for what you use.

In this learning context, you don’t study AWS as a catalog of 200 services, but as a minimal operational set: EC2 (compute), VPC and security groups (networking and the security boundary), Elastic IP (a stable public endpoint), and S3 (backups).

3) Docker and container fundamentals you need to know

Docker is a platform for building, shipping, and running applications inside containers. Solomon Hykes created it in 2013 at dotCloud to solve the "works on my machine" problem by packaging an application with everything it needs to run.

A container is an isolated runtime process with its own filesystem view, network namespace, and agreed-upon entrypoint — not a virtual machine. Containers build on Linux kernel features (cgroups and namespaces) that date back to 2006–2008, but Docker made them practical and accessible.

You need to know these concepts: image vs container, volumes (named volumes and bind mounts), networks and network aliases, port mapping, and environment variables.

In the KomITi stack you will encounter these patterns early: a custom image built on top of an official one, configuration injected through environment variables and mounted config files, bind-mounted addons folders, and services that wait on a database healthcheck.

3.1) Docker file map in odoo4komiti

Once you understand the Docker concepts, the next question is: where do those decisions live in the odoo4komiti repo?

[odoo4komiti/]
+-- docker-compose.yml            -> service topology: odoo-web + db + volumes + ports
+-- Dockerfile.odoo               -> custom Odoo image build
+-- .env                          -> local-only environment variables (ignored by git)
+-- [config/]
    +-- odoo.conf                 -> shared/default Odoo config
    +-- odoo.local.conf           -> local override config
    +-- odoo.local.conf.example   -> example local config

This is the Docker equivalent of the Terraform file map in section 4: you do not read the repo as a random folder dump; you read it by responsibility.

3.2) docker-compose.yml

docker-compose.yml is the entry point of the local Docker stack. Docker Compose was introduced by Docker in 2014 so teams could define a multi-container application in one YAML file.

In odoo4komiti, this file answers the practical runtime questions: which services exist and which images they use, which ports are exposed on the host, what data persists across restarts, and how the services find each other.

services:
  odoo-web:
    build:
      context: .                    # build context = repo root
      dockerfile: Dockerfile.odoo   # custom Dockerfile (adds boto3 to stock Odoo image)
    image: odoo:19.0-boto3          # tag for the built image
  db:
    image: postgres:16              # official PostgreSQL 16 image from Docker Hub
odoo-web:
  ports:
    - "8069:8069"               # host:container — access Odoo at localhost:8069

db:
  ports:
    - "5432:5432"               # host:container — expose PostgreSQL to host (dev convenience)

odoo-web:
  volumes:
    - odoo-data:/var/lib/odoo                                        # filestore, sessions (named volume)
    - ./config/${ODOO_CONF_FILE:-odoo.conf}:/etc/odoo/odoo.conf:ro   # Odoo config file (read-only bind mount)
    - ./custom-addons:/mnt/extra-addons:rw                           # your custom modules (bind mount)
    - ./third-party-addons:/mnt/third-party-addons:rw                # OCA / community modules (bind mount)

db:
  volumes:
    - pg-data:/var/lib/postgresql/data                               # database files (named volume)

odoo-web:
  environment:
    AWS_ACCESS_KEY_ID:               # passed from host; used for S3 backups (not needed locally)
    AWS_SECRET_ACCESS_KEY:           # passed from host; used for S3 backups (not needed locally)
    AWS_DEFAULT_REGION:              # passed from host; used for S3 backups (not needed locally)
    HOST: db                         # network alias of the postgres service
    PORT: 5432                       # default PostgreSQL port
    USER: odoo                       # must match POSTGRES_USER in db service
    PASSWORD: odoo123                # must match POSTGRES_PASSWORD in db service

db:
  environment:
    POSTGRES_DB: postgres            # bootstrap DB for server start; will be unused after admin creates its own
    POSTGRES_USER: odoo              # PostgreSQL superuser; Odoo connects as this user for ALL database operations
    POSTGRES_PASSWORD: odoo123       # must match PASSWORD in the odoo-web service

Tip: the username odoo is a community convention used in every official Odoo Docker example — changing it does not add real security (that is security through obscurity). What matters is a strong password. In production, replace odoo123 with a long random string; the username can stay odoo.

The diagram below shows how the Odoo-side and Postgres-side settings must match (here illustrated with the lab password komiti-library-dev-local):

┌─────────────────────────────────────────────┐
│  odoo service                               │
│                                             │
│  environment:                               │
│    HOST: db ─────────────────────────┐      │
│    PORT: 5432 ───────────────────┐   │      │
│    USER: odoo ───────────────┐   │   │      │
│    PASSWORD:                 │   │   │      │
│      komiti-library-dev-local│   │   │      │
│                              │   │   │      │
└──────────────────────────────┼───┼───┼──────┘
                               │   │   │
              must match ══════╪═══╪═══╪══════
                               │   │   │
┌──────────────────────────────┼───┼───┼──────┐
│  postgres service            │   │   │      │
│                              ▼   ▼   ▼      │
│  environment:                               │
│    POSTGRES_USER: odoo ◄─────┘   │   │      │
│    POSTGRES_PASSWORD:            │   │      │
│      komiti-library-dev-local ◄──┘   │      │
│    POSTGRES_DB: postgres             │      │
│                                      │      │
│  listens on port 5432 ◄─────────────┘      │
│                                      │      │
│  networks:                           │      │
│    library-dev:                      │      │
│      aliases:                        │      │
│        - db ◄────────────────────────┘      │
│                                             │
└─────────────────────────────────────────────┘
odoo-web:
  depends_on:
    db:
      condition: service_healthy   # Odoo waits until pg_isready passes before starting

If you want to understand why Odoo appears on localhost:8069, why Postgres is on 5432, why the filestore survives a container restart, or why Odoo waits for the database healthcheck, docker-compose.yml is the first file you should read.

A junior mistake is to leave too much at the default value. If you want to increase security, the first things that must change are the database password (odoo123 in the example above) and the dev-convenience exposure of PostgreSQL port 5432 to the host.

3.3) Dockerfile.odoo

Dockerfile.odoo defines the custom image for the Odoo service. A Dockerfile was introduced with Docker itself in 2013 so an image build could be described as code instead of manual shell steps.

In this repo, Dockerfile.odoo starts from the official odoo:19.0 base image, switches to root, installs python3-boto3, and then switches back to the odoo user. That tells you something important about responsibility:

If a Python package is missing inside the Odoo container, that is usually a Dockerfile.odoo question, not a Compose question.
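The pattern described above can be sketched as a Dockerfile. This is a sketch of the idea, not a copy of the repo's exact file:

```dockerfile
# Sketch: extend the official Odoo image with one extra system package
FROM odoo:19.0

# The base image runs as the unprivileged "odoo" user; installing
# packages requires root, so switch temporarily.
USER root
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3-boto3 \
 && rm -rf /var/lib/apt/lists/*

# Drop privileges again so the container does not run Odoo as root.
USER odoo
```

The USER root / USER odoo dance is the key discipline: the privilege escalation exists only for the build step that needs it.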

3.4) .env and config/*.conf

.env provides local environment variables for Docker Compose. In this repo it selects which Odoo config file to mount and can also provide AWS-related variables needed by the local runtime. Because these values can be sensitive and machine-specific, .env is local-only and must not be treated like normal committed source code.

The config/ directory holds the Odoo application configuration itself: odoo.conf (the shared/default config), odoo.local.conf (your local override, not committed), and odoo.local.conf.example (a committed template you copy to create it).

The short mental model is: .env decides which config and variables the container receives; config/*.conf decides how Odoo behaves after it starts.
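As a sketch of that split (the variable values below are illustrative assumptions; only ODOO_CONF_FILE is a name taken from the compose file):

```ini
# .env — local-only, never committed (sketch)
ODOO_CONF_FILE=odoo.local.conf      # which config file Compose mounts at /etc/odoo/odoo.conf
AWS_DEFAULT_REGION=eu-central-1     # hypothetical value; only relevant for S3 backup testing
```

Compose reads this file automatically from the same directory, substitutes the variables into docker-compose.yml, and the selected odoo.local.conf then governs Odoo's behavior inside the container.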

4) Terraform

Tip: if you are a candidate without a budget for AWS services, you can skip this chapter for now and come back to it later.

4.1) Terraform mental model

Terraform is an infrastructure-as-code (IaC) tool — you describe the infrastructure you want in code, and the tool builds it for you. HashiCorp released it in 2014 to give teams a single declarative language for provisioning resources across any cloud provider.

The core loop: you declare the desired infrastructure in .tf files, terraform plan computes the difference between that declaration and reality, terraform apply makes reality match, and the state file records what now exists. Every subsequent change repeats the same loop.

4.2) What are provider, resource, data source, and output

A provider is a plugin that enables Terraform to communicate with a given system. In this repo, the most important one is the AWS provider — Terraform by itself doesn’t know what EC2, VPC, or Elastic IP is; the AWS provider gives it that vocabulary and API bridge.

resource describes something that Terraform creates or modifies, such as an EC2 instance, a security group, an Elastic IP, or a route table.

data reads something that already exists without creating or modifying it. For example, you might use a data block to look up the latest Ubuntu AMI ID so your EC2 resource can reference it. The key difference: a resource owns a lifecycle (Terraform will create, update, and destroy it), while a data source is read-only and never changes the system it queries.

output exposes a value to your terminal so you can use it after terraform apply finishes. Without outputs, Terraform creates the infrastructure but you have to dig through the AWS console to find what you need. In the KomITi stack, typical outputs are the server’s public IP, the SSH command to connect, and the HTTP/HTTPS URL — everything you need to start working with the environment immediately.
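To make the four concepts concrete, here is a minimal sketch that uses all of them together. The region, instance size, and resource names are illustrative assumptions, not values from the KomITi stacks:

```hcl
terraform {
  required_providers {
    aws = { source = "hashicorp/aws" }   # provider: Terraform's vocabulary for AWS
  }
}

provider "aws" {
  region = "eu-central-1"                # hypothetical region
}

# data: read something that already exists (latest Ubuntu 22.04 AMI)
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]         # Canonical's AWS account
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# resource: something Terraform creates and manages
resource "aws_instance" "odoo" {
  ami           = data.aws_ami.ubuntu.id # reference the looked-up AMI
  instance_type = "t3.small"             # hypothetical size
}

# output: surface a value to the operator after apply
output "public_ip" {
  value = aws_instance.odoo.public_ip
}
```

Note how the resource references the data source: that reference is also how Terraform learns the order in which to evaluate things (section 4.6).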

4.3) Directory structure: general and KomITi-specific

When you first read a Terraform repo, it’s not enough to know what a resource is; you also need to know where things live.

The KomITi-specific layout looks roughly like this:

[infra/]
+-- [aws/]
    +-- [modules/]                        -> reusable Terraform modules
    |   +-- [odoo_ec2_compose/]           -> shared module that root stacks reuse
    |       +-- main.tf                   -> module internal resources and wiring
    |       +-- variables.tf              -> inputs the module receives from root stack
    |       +-- outputs.tf                -> module outputs
    +-- [odoo-dev-ec2-compose/]           -> root stack for dev
    |   +-- main.tf / versions.tf         -> root Terraform setup
    |   +-- network.tf / security.tf / compute.tf / locals.tf
    |   +-- variables.tf / outputs.tf     -> root inputs/outputs for dev stack
    |   +-- templates/ / scripts/         -> templates and helper operational scripts
    |   +-- README.md / RUNBOOK.md        -> operator documentation
    |   +-- terraform.tfvars / terraform.tfstate   -> local runtime/state files, not for commit
    |   +-- .terraform/ / .terraform.lock.hcl      -> local provider cache and lock after init
    +-- [odoo-prod-ec2-compose/]          -> root stack for prod
        +-- main.tf / versions.tf
        +-- network.tf / security.tf / compute.tf / locals.tf
        +-- variables.tf / outputs.tf
        +-- templates/ / scripts/
        +-- README.md / RUNBOOK.md
        +-- terraform.tfvars / terraform.tfstate
        +-- .terraform/ / .terraform.lock.hcl

A short rule for reading this layout: modules/odoo_ec2_compose/main.tf and modules/odoo_ec2_compose/variables.tf belong to the reusable module, while odoo-dev-ec2-compose/main.tf and odoo-dev-ec2-compose/variables.tf belong to the root stack that calls that module. In other words, they are not duplicates serving the same role: the module has its own internal Terraform API, and the root stack has its own environment-level API.

Practically, when we say “Terraform code for DEV”, in this repo that most often means: open infra/aws/odoo-dev-ec2-compose/ and read that directory as a single infrastructure system.

Keep in mind that the name odoo-dev-ec2-compose already carries 3 layers within it: odoo (the workload it runs), dev (the environment), and ec2-compose (the delivery mechanism: an EC2 host running the application via Docker Compose).

4.4) Terraform files in [modules/]

Once you see that odoo_ec2_compose/ exists in modules/, it’s important not to think of it as just a helper folder. It is a reusable Terraform building block that root stacks call.

There are 3 key files in that module:

4.4.1) main.tf

main.tf in the module describes the internal logic of the reusable block: which AWS resources the module creates and how they are connected. In our KomITi example, the module builds multiple layers at once (VPC, subnet, route table, security group, etc.) so the root stack doesn’t have to write them from scratch every time. If you want to understand what odoo_ec2_compose actually does, the module main.tf is the first place to read.

4.4.2) variables.tf

variables.tf in the module defines the input API of that reusable block. Here the module says: if you want to use me, you must provide or may provide values such as name_prefix, env, allowed_ssh_cidr, instance_type, ssh_public_key, and so on. In other words: the root stack calls the module via its own main.tf, and the module’s variables.tf determines what that call is allowed and required to pass.

4.4.3) outputs.tf

outputs.tf in the module defines what the module returns back to the root stack. In our case, these are important operational values such as public IP, elastic IP, or backup bucket name. This matters because the root stack often doesn’t just want to “fire up” the module, but also to take some of its results and show them to the operator or pass them further.
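Putting 4.4.2 and 4.4.3 together, the module's API can be sketched like this. The variable names come from the text above; the internal resource name aws_eip.this is a hypothetical detail:

```hcl
# modules/odoo_ec2_compose/variables.tf — the module's input API (sketch)
variable "name_prefix" {
  type        = string
  description = "Prefix applied to all resource names"
}

variable "allowed_ssh_cidr" {
  type        = string
  description = "CIDR range allowed to reach SSH on the instance"
}

variable "instance_type" {
  type    = string
  default = "t3.small"              # hypothetical default: optional for callers
}

# modules/odoo_ec2_compose/outputs.tf — what the module returns (sketch)
output "public_ip" {
  value = aws_eip.this.public_ip    # hypothetical internal resource name
}
```

A variable with a default is a "may provide"; one without a default is a "must provide". That distinction is the whole contract between module and root stack.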

4.5) Terraform files in [root stack/]

Once you understand the directory structure, it makes sense to also read what the key Terraform files inside that stack do.

4.5.1) Terraform state: terraform.tfstate

terraform.tfstate is the critical artifact that contains the state of all resources.

State remembers: which real resources correspond to which resource blocks in your code, the IDs the provider assigned to them, and the attribute values recorded at the last apply.

Key rules: never commit terraform.tfstate to git, never edit it by hand, treat it as sensitive (it can contain secrets in plain text), and never run two applies against the same state at once.

4.5.2) variables.tf, terraform.tfvars.example, and terraform.tfvars

Variables are the input parameters of a Terraform configuration.

They exist so that code doesn’t hardcode: environment-specific values such as instance sizes, domain names, allowed networks, and credentials; anything that differs between dev and prod, or between operators.

There are 3 distinct files to understand here:

variables.tf defines which input parameters exist, their types, and what is optional or has a default. It is the schema the stack expects (e.g. instance_type, domain_name, allowed_ssh_cidr).

terraform.tfvars contains the actual local values used during plan/apply (terraform.tfvars.example is a template for it). Important: DEV and PROD intentionally don’t use the same secret model. For PROD, terraform.tfvars must not contain production secrets — those live on the PROD EC2 server itself, not in Terraform git artifacts.

A Terraform beginner assumes it’s normal to put every password in tfvars. Think about whether that value ends up in state, who has access to that state, and whether the secret belongs to the Terraform layer or the bootstrap/runtime layer.
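One concrete discipline that follows: mark secret inputs as sensitive, so they are redacted from plan and apply output. Remember that sensitive hides the value from logs, not from state. A sketch:

```hcl
variable "postgres_password" {
  type      = string
  sensitive = true   # redacted in plan/apply output, but still stored in state

  # No default on purpose: for dev the value may come from terraform.tfvars,
  # but for prod it should not live in Terraform artifacts at all.
}
```

If Terraform never receives the production secret, it can never leak it through state or logs; that is the reasoning behind keeping prod secrets on the server itself.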

Key discipline:

4.5.3) main.tf

main.tf is the entry point of a root stack: which modules are used, how they are called, and with which values. Both this file and the module main.tf from 4.4.1) are called main.tf, but they operate at different levels: the module one is internal implementation, this one is environment-level orchestration.
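A sketch of what that orchestration typically looks like. The variable names follow 4.4.2; the concrete values are illustrative assumptions:

```hcl
# odoo-dev-ec2-compose/main.tf — environment-level orchestration (sketch)
module "odoo" {
  source           = "../modules/odoo_ec2_compose"

  name_prefix      = "komiti-dev"          # hypothetical value
  env              = "dev"
  instance_type    = var.instance_type     # forwarded from this stack's variables.tf
  allowed_ssh_cidr = var.allowed_ssh_cidr
  ssh_public_key   = var.ssh_public_key
}
```

The root stack's job is exactly this thin: choose the module, name the environment, and forward environment-specific inputs. All the heavy lifting lives in the module.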

4.5.4) network.tf

network.tf is the file where the network topology logic most often lives: VPC, subnets, route tables, internet gateway, and similar. If you want to understand where EC2 actually lives and how it even reaches the internet, this is one of the first files you should read.

4.5.5) outputs.tf

outputs.tf is the file that declares which important values Terraform exposes to the operator after apply. These are often public IP, SSH command, URL, or some other identifier you need immediately for the next operational step.

That’s why outputs.tf is important: it’s the bridge between infra code and operator work. Without it, Terraform can successfully create resources while the person doing the deploy still can’t quickly see what their most important next entry point into the system is.

4.6) Dependency reasoning

One of the most important Terraform concepts is the dependency graph.

Terraform must know: which resource depends on which other resource, so that it can create them in a working order and destroy them in the reverse order.

Practically:

On the local komiti_academy Docker Desktop lab, that dependency graph looks roughly like this. If you have Graphviz dot installed, you can generate it from the terminal:

terraform graph | dot -Tpng > graph.png

Here is the same relationship as an ASCII diagram:

Dependency graph (arrows read "depends on")

docker_container.odoo ──┬──▶ docker_container.postgres ──┬──▶ docker_image.postgres
                        │                                ├──▶ docker_network.odoo
                        │                                └──▶ docker_volume.postgres_data
                        ├──▶ docker_image.odoo
                        └──▶ docker_volume.odoo_data

Read the arrows as “depends on”: docker_container.odoo depends on docker_container.postgres because Odoo cannot start without a running database. Postgres in turn depends on its image, a shared network, and a persistent volume for data. Odoo also needs its own image and a volume for the filestore. Terraform reads this graph and creates resources in the right order — volumes and images first, then postgres, then odoo.

A common junior mistake is to read Terraform file by file. The dependency graph is a better starting point — it shows you the actual creation order and which resources are connected.
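In HCL, those graph edges come from two mechanisms: implicit dependencies (one resource references another's attribute) and the explicit depends_on argument. A sketch using the lab's Docker provider resources (the container name is a hypothetical value):

```hcl
resource "docker_container" "odoo" {
  name  = "komiti-academy-odoo"          # hypothetical name
  image = docker_image.odoo.image_id     # implicit edge: this container depends on its image

  # Explicit edge: nothing in this block references the postgres container's
  # attributes, but Odoo must not start before the database exists.
  depends_on = [docker_container.postgres]
}
```

Prefer implicit references where possible; reserve depends_on for ordering constraints that the attribute references cannot express, such as "the database container must exist first".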

4.7) Summarizing what Terraform code typically means in this repo

When reading AWS Terraform directories, think like this:

This is the infrastructure equivalent of the Odoo mental model:

4.8) How AWS, Docker, and Terraform connect into a single flow

The most useful foundational mental model for this repo is the following sequence:

  1. Terraform defines AWS resources.
  2. AWS provides the host, network, security boundary, and public endpoint.
  3. On that host, Docker/Compose runs the application services.
  4. Only then do you measure whether the Odoo runtime is actually healthy.

That’s why it’s important not to mix problem classes: a failed plan or apply is a Terraform problem; a container that won’t start is a Docker problem; a misbehaving module is an Odoo problem. Each is debugged at its own layer.

4.9) How to turn Terraform files into action and materialize artifacts (Docker containers, AWS resources, Odoo)

4.9.1) Plan is not a formality. terraform plan is a terminal command, and it is not a checkbox before apply.

Its purpose is to clearly show you: what will be created, what will be changed in place, and what will be destroyed and recreated, before anything touches real infrastructure.

Professional thinking means: reading every planned destroy or replace, and treating an unexpected one as a stop signal rather than noise to click through.

If you can’t read a Terraform plan, then you are not yet operationally safe for infrastructure work.

4.9.2) Apply is not a deploy script. The terminal command terraform apply does not mean “I launched the server and I’m done”.

Apply means: Terraform has executed the plan, the real resources now match the code, and state records them.

But that is not the same as: a healthy, reachable application. The server can exist while Odoo is still bootstrapping, misconfigured, or failing.

In the KomITi AWS context, the flow often goes like this:

  1. Terraform brings up the infrastructure skeleton.
  2. The bootstrap/compose/runtime layer brings the application to an operational state.
  3. Then verification follows.

This is the same mental discipline as with Odoo: infra code truth is not the same as runtime truth.

One more important command: terraform destroy tears down all resources Terraform manages. In a lab, it is how you cleanly remove everything. In production, it is irreversible and must never be run without explicit intent and backup.

4.10) Minimal safe workflow in KomITi

When making a Terraform change, the minimum safe sequence is:

  1. understand what you’re changing at the resource level,
  2. verify variables and environment context,
  3. run terraform init if needed,
  4. run terraform validate,
  5. run terraform plan,
  6. read the impact,
  7. only then run terraform apply,
  8. verify outputs and runtime,
  9. document the operational delta if significant.

This is not bureaucracy; it is basic production discipline.

5) Terraform vs Docker Compose: same information, different place of record

Terraform files and Docker Compose complement each other, since you use each for what it does best. This is the case in odoo4komiti, which is a real production system. But in this komiti_academy lab — which is much simpler than odoo4komiti — you should view these two approaches as mutually exclusive variants (you use either Terraform or Docker Compose, not both simultaneously for the same runtime). Below is a mapping of the local komiti_academy lab so you can clearly see where the same runtime information is recorded in the Terraform variant versus the Compose variant.

| Information | Terraform | Compose |
| --- | --- | --- |
| Local academy runtime name | locals.tf: name_prefix | docker-compose.yml: name: komiti-academy-dev |
| Odoo image | default: variables.tf odoo_image = odoo:19.0; actual value: terraform.tfvars odoo:19.0; applied in: compute.tf docker_image.odoo | docker-compose.yml: services.odoo.image |
| Postgres image | default: variables.tf postgres_image = postgres:16; actual value: terraform.tfvars postgres:16; applied in: compute.tf docker_image.postgres | docker-compose.yml: services.postgres.image |
| Host port for Odoo | default: variables.tf odoo_port = 8067; actual value: terraform.tfvars odoo_port = 8067; applied in: compute.tf ports.external = var.odoo_port | docker-compose.yml: services.odoo.ports |
| Mapping 8067 → 8069 | compute.tf: ports { external = var.odoo_port, internal = 8069 } | docker-compose.yml: "${ODOO_PORT:-8067}:8069" |
| Initial Postgres DB | default: variables.tf postgres_db = postgres; actual value: terraform.tfvars postgres; applied in: compute.tf POSTGRES_DB = ${var.postgres_db} | docker-compose.yml: POSTGRES_DB |
| Postgres user | default: variables.tf postgres_user = odoo; actual value: terraform.tfvars admin.komiti_odoo; applied in: compute.tf POSTGRES_USER and Odoo USER | docker-compose.yml: POSTGRES_USER and Odoo USER |
| Postgres password | default: variables.tf postgres_password (no default value); actual value: terraform.tfvars komiti-academy-local-dev; applied in: compute.tf POSTGRES_PASSWORD and Odoo PASSWORD | docker-compose.yml: POSTGRES_PASSWORD and Odoo PASSWORD |
| Odoo connects to Postgres on host db | compute.tf: Odoo env HOST=db and Postgres network alias db | docker-compose.yml: Odoo HOST: db and network alias db |
| Addons bind mount | locals.tf: addons_host_path; compute.tf: volume mount to /mnt/extra-addons | docker-compose.yml: ../../../custom-addons:/mnt/extra-addons:rw |
| Odoo data volume | compute.tf: docker_volume.odoo_data | docker-compose.yml: odoo-data:/var/lib/odoo |
| Postgres data volume | compute.tf: docker_volume.postgres_data | docker-compose.yml: postgres-data:/var/lib/postgresql/data |
| Network | network.tf: docker_network.odoo | docker-compose.yml: networks.academy |
| Odoo dependency on Postgres | compute.tf: depends_on = [docker_container.postgres] | docker-compose.yml: depends_on.postgres.condition: service_healthy |
| URL output | outputs.tf: odoo_url | No output block; the practical equivalent is ports and the command docker compose ps |

Especially important notes:

| Topic | Terraform | Compose | What this means |
| --- | --- | --- | --- |
| Master password for localhost Odoo | Not explicitly defined | Not explicitly defined | None of your academy files currently set admin_passwd, so the password you see in the browser does not come directly from this Terraform/Compose wiring. |
| Postgres healthcheck | None | Present | The Compose variant is a bit richer here, because it has an explicit healthcheck and waits for Postgres to be healthy before starting Odoo. |
| Docker Desktop UI grouping | No Compose metadata | Has Compose metadata | That’s why Compose is grouped in the Docker Desktop UI, while Terraform Docker provider resources look more like individual Docker objects. |
| State | terraform.tfstate | No single state file | Terraform remembers the desired and actual state in its state file, while Compose relies on the current Docker engine state and Compose project metadata labels. |

6) Lab: bring up the Odoo runtime for development

In this lab you will bring up the runtime on which you will — from zero to hero — develop the Odoo module. This lab is not part of the core product scope of the komiti_academy module, but a learning/support exercise so you can locally run and verify everything you build in the later tutorials.

During this lab you will bring up two separate Odoo instances on your machine: a local “prod” instance (your stable reference, on localhost:8068) and a local “dev” instance (your working environment, on localhost:8067).

For simplicity we skip the staging branch in this lab. Your workflow will be feature → main (with a PR in between). Both instances share the same custom-addons/ folder on disk, but their databases are completely independent — so the prod database won’t have your half-finished module installed, even though the files are visible on the filesystem.

There are two options depending on your available resources:

Option A — for candidates WITH an AWS/Azure lab budget

If you have access to a cloud account where you can provision resources, this option lets you practice Terraform on real infrastructure. The purpose is not just to get a running Odoo, but for you to practically exercise: the full Terraform lifecycle (init, validate, plan, apply, destroy), variable and output discipline, and clean dev/prod stack separation.

Even without real cloud, you can run the same Terraform exercise locally via the Docker provider. In that case, this lab does not teach real cloud networking, VM lifecycle, or cloud security boundaries. View it as a bridge: patterns around variables, outputs, naming, and environment separation transfer directly to AWS/Azure Terraform work later, but the local variant is not a 1:1 cloud template.

The suggested lab should live in your repo komiti_library, for example like this:

Within each of those two root directories, the candidate should have at least: main.tf, variables.tf, outputs.tf, and terraform.tfvars.example.

The minimal runtime that Terraform should bring up is: one Docker network, persistent volumes, a PostgreSQL container, and an Odoo container.

A good minimum viable lab is:

  1. Terraform Docker provider locally manages the Docker Desktop runtime.
  2. main.tf creates the network, volume, and both containers.
  3. variables.tf carries the environment name, port, DB credentials for the lab, and naming prefix.
  4. outputs.tf outputs a URL like http://localhost:8067 and key resource names.
  5. terraform.tfvars.example exists for both dev and prod, so the candidate sees the separation even when both stacks live locally.
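A sketch of what item 2 can look like with the kreuzwerker/docker provider. Names and values are illustrative; only the Postgres side is shown, and the Odoo container follows the same pattern:

```hcl
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

provider "docker" {}                       # talks to the local Docker Desktop engine

resource "docker_network" "odoo" {
  name = "${var.name_prefix}-net"
}

resource "docker_volume" "postgres_data" {
  name = "${var.name_prefix}-pg-data"
}

resource "docker_image" "postgres" {
  name = "postgres:16"
}

resource "docker_container" "postgres" {
  name  = "${var.name_prefix}-postgres"
  image = docker_image.postgres.image_id
  env = [
    "POSTGRES_USER=odoo",
    "POSTGRES_PASSWORD=${var.postgres_password}",
    "POSTGRES_DB=postgres",
  ]
  networks_advanced {
    name    = docker_network.odoo.name
    aliases = ["db"]                       # Odoo will find the database at host "db"
  }
  volumes {
    volume_name    = docker_volume.postgres_data.name
    container_path = "/var/lib/postgresql/data"
  }
}

output "odoo_url" {
  value = "http://localhost:${var.odoo_port}"
}
```

Notice that this is the same topology as the Compose files in Option B (network alias db, persistent volume, env-based credentials), just recorded in Terraform's language instead of YAML.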

The bootstrap sequence in this lab means:

The minimum safe lab flow follows the same sequence as section 4.10, ending with terraform destroy -var-file=terraform.tfvars to cleanly tear down the lab. If you successfully complete this path, you have truly practiced the Terraform shape and lifecycle discipline.

Option B — for candidates WITHOUT an AWS/Azure lab budget

Don’t worry — you can still do everything needed to continue learning. In this option you skip Terraform entirely and use Docker Compose to bring up Odoo directly on Docker Desktop. The result is the same two running Odoo instances (prod + dev) you need for the module development tutorials; the only difference is that Terraform infrastructure management is not part of your exercise.

Prerequisites: Docker Desktop installed and running, Git, and a local clone of your komiti_library repo (the examples below assume C:\dev\komiti_library).

Step 1 — open a terminal and make sure you are on main.

PS C:\Users\you> cd C:\dev\komiti_library
PS C:\dev\komiti_library> git checkout main
PS C:\dev\komiti_library> git pull origin main

You start on main because infrastructure files belong to all branches — they are not feature-specific. By committing them to main first, every future feature branch will inherit them automatically.

Step 2 — create the directory structure and compose files.

You need two directories — one for each environment. You also need a custom-addons/ folder at the repo root (this is where your Odoo modules will live later). Create them:

PS C:\dev\komiti_library> mkdir infra/local/odoo-prod-docker-desktop
PS C:\dev\komiti_library> mkdir infra/local/odoo-dev-docker-desktop
PS C:\dev\komiti_library> mkdir custom-addons
PS C:\dev\komiti_library> echo $null > custom-addons/.gitkeep

Note: echo $null > custom-addons/.gitkeep creates an empty file called .gitkeep. Git does not track empty directories, so without at least one file inside custom-addons/ the folder would not appear in the repository. The name .gitkeep is a widely used convention — Git itself does not treat it specially.

Now create the prod compose file at infra/local/odoo-prod-docker-desktop/docker-compose.yml:

name: komiti-library-prod                                  # Docker Compose project name (prefixes container names)

services:
  postgres:
    image: postgres:16                                     # official PostgreSQL 16 image
    restart: unless-stopped                                # auto-restart unless you explicitly stop it
    environment:
      POSTGRES_DB: postgres                                # bootstrap DB for server start; unused later
      POSTGRES_USER: odoo                                  # PostgreSQL superuser; Odoo connects as this user
      POSTGRES_PASSWORD: komiti-library-prod-local         # must match PASSWORD in the odoo service
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U odoo -d postgres"]  # checks if PostgreSQL is ready to accept connections
      interval: 10s                                        # check every 10 seconds
      timeout: 5s                                          # fail if no response within 5 seconds
      retries: 10                                          # mark unhealthy after 10 consecutive failures
    volumes:
      - postgres-data:/var/lib/postgresql/data             # persist database files across container restarts
    networks:
      library-prod:
        aliases:
          - db                                             # other services find postgres at hostname "db"

  odoo:
    image: odoo:19.0                                       # official Odoo 19 image
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy                         # wait for pg_isready before starting Odoo
    environment:
      HOST: db                                             # network alias of the postgres service
      PORT: 5432                                           # default PostgreSQL port
      USER: odoo                                           # must match POSTGRES_USER above
      PASSWORD: komiti-library-prod-local                  # must match POSTGRES_PASSWORD above
    ports:
      - "8068:8069"                                        # host:container — access prod Odoo at localhost:8068
    volumes:
      - odoo-data:/var/lib/odoo                            # filestore, sessions (named volume)
      - ../../../custom-addons:/mnt/extra-addons:rw        # your modules from repo root (bind mount)
    networks:
      library-prod:
        aliases:
          - odoo                                           # optional: other services can find odoo by name

volumes:
  postgres-data:                                           # named volume for PostgreSQL data
  odoo-data:                                               # named volume for Odoo filestore

networks:
  library-prod:                                            # isolated network for this stack

Note: the line ../../../custom-addons:/mnt/extra-addons:rw is a bind mount. It tells Docker: “take the custom-addons/ folder from the repo root on your Windows disk and make it appear inside the container at /mnt/extra-addons.” The path starts with ../../../ because the compose file lives three directories deep (infra/local/odoo-prod-docker-desktop/). The dev compose file will have the same line, so both environments see the exact same folder on disk.

Then create the dev compose file at infra/local/odoo-dev-docker-desktop/docker-compose.yml:

name: komiti-library-dev                                   # Docker Compose project name (prefixes container names)

services:
  postgres:
    image: postgres:16                                     # official PostgreSQL 16 image
    restart: unless-stopped                                # auto-restart unless you explicitly stop it
    environment:
      POSTGRES_DB: postgres                                # bootstrap DB for server start; unused later
      POSTGRES_USER: odoo                                  # PostgreSQL superuser; Odoo connects as this user
      POSTGRES_PASSWORD: komiti-library-dev-local          # must match PASSWORD in the odoo service
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U odoo -d postgres"]  # checks if PostgreSQL is ready to accept connections
      interval: 10s                                        # check every 10 seconds
      timeout: 5s                                          # fail if no response within 5 seconds
      retries: 10                                          # mark unhealthy after 10 consecutive failures
    volumes:
      - postgres-data:/var/lib/postgresql/data             # persist database files across container restarts
    networks:
      library-dev:
        aliases:
          - db                                             # other services find postgres at hostname "db"

  odoo:
    image: odoo:19.0                                       # official Odoo 19 image
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy                         # wait for pg_isready before starting Odoo
    environment:
      HOST: db                                             # network alias of the postgres service
      PORT: 5432                                           # default PostgreSQL port
      USER: odoo                                           # must match POSTGRES_USER above
      PASSWORD: komiti-library-dev-local                   # must match POSTGRES_PASSWORD above
    ports:
      - "8067:8069"                                        # host:container — access dev Odoo at localhost:8067
    volumes:
      - odoo-data:/var/lib/odoo                            # filestore, sessions (named volume)
      - ../../../custom-addons:/mnt/extra-addons:rw        # your modules from repo root (bind mount)
    networks:
      library-dev:
        aliases:
          - odoo                                           # optional: other services can find odoo by name

volumes:
  postgres-data:                                           # named volume for PostgreSQL data
  odoo-data:                                               # named volume for Odoo filestore

networks:
  library-dev:                                             # isolated network for this stack

Note: the key differences between prod and dev are the project name, password, network name, and the host port (8068 for prod, 8067 for dev). Keep POSTGRES_DB as postgres in both files. The real Odoo databases (komiti_library_prod and komiti_library_dev) are created later from the Odoo web UI. If you set POSTGRES_DB directly to those names, Postgres creates empty databases first and Odoo may return HTTP 500 instead of the normal database creation screen.
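To make that delta concrete, here is a self-contained sketch: the four differing values from the two compose files above, written to temp files and diffed. The temp files stand in for the real compose files; everything not listed here is identical between the stacks.

```shell
# The complete prod-vs-dev delta, one value per line.
cat > /tmp/prod-delta.txt <<'EOF'
name: komiti-library-prod
PASSWORD: komiti-library-prod-local
network: library-prod
host port: 8068
EOF
cat > /tmp/dev-delta.txt <<'EOF'
name: komiti-library-dev
PASSWORD: komiti-library-dev-local
network: library-dev
host port: 8067
EOF
diff /tmp/prod-delta.txt /tmp/dev-delta.txt || true   # every line differs; nothing else does
```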

Step 3 — commit the infrastructure files to main.

PS C:\dev\komiti_library> git add -A
PS C:\dev\komiti_library> git commit -m "Add infra: prod and dev Docker Compose stacks"
PS C:\dev\komiti_library> git push origin main

Infrastructure lives on main. Every feature branch you create later will inherit these files.
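You can watch this inheritance happen in a throwaway repo; all file names and contents below are illustrative.

```shell
# A branch created from the current branch carries every committed file with it.
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p infra/local
echo "compose stack" > infra/local/docker-compose.yml
git add -A
git -c user.email=demo@example.com -c user.name=demo commit -q -m "Add infra"

# New feature branch: the infra file committed above is already here.
git checkout -q -b 2026-04-01-library-module
cat infra/local/docker-compose.yml   # -> compose stack
```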

Step 4 — bring up the prod Odoo instance.

PS C:\dev\komiti_library> cd infra/local/odoo-prod-docker-desktop
PS C:\dev\komiti_library\infra\local\odoo-prod-docker-desktop> docker compose up -d

Docker will pull the postgres:16 and odoo:19.0 images (this may take a few minutes the first time). Once done, verify:

PS ...> docker compose ps

You should see two services: postgres (healthy) and odoo (running). Open http://localhost:8068 in your browser. You should see the Odoo database creation screen. Create a database named komiti_library_prod — this is your production baseline. Leave it clean for now; don’t install any custom module here yet.

Step 5 — create a feature branch.

PS ...> cd C:\dev\komiti_library
PS C:\dev\komiti_library> git checkout -b 2026-04-01-library-module
PS C:\dev\komiti_library> git push -u origin 2026-04-01-library-module

You are now on a feature branch created from main. The infrastructure files you committed in step 3 are already here.

Step 6 — bring up the dev Odoo instance.

PS C:\dev\komiti_library> cd infra/local/odoo-dev-docker-desktop
PS C:\dev\komiti_library\infra\local\odoo-dev-docker-desktop> docker compose up -d

Note: in the cd command above we used forward slashes (/). PowerShell accepts both / and \ when navigating directories, so cd infra/local/odoo-dev-docker-desktop and cd infra\local\odoo-dev-docker-desktop do the same thing. Inside files such as docker-compose.yml, however, always use forward slashes: YAML values, Linux paths, and URLs expect them, and backslashes will break them. Rule of thumb: use \ only when Windows requires it (e.g. full absolute paths like C:\dev\...); everywhere else, forward slashes are safer and more portable.
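A one-line illustration of the conversion, assuming a POSIX shell with the standard tr utility:

```shell
# Turn a Windows-style relative path into the forward-slash form that
# YAML values, Linux paths, and URLs require.
win_path='infra\local\odoo-dev-docker-desktop'
posix_path=$(printf '%s' "$win_path" | tr '\\' '/')
echo "$posix_path"   # -> infra/local/odoo-dev-docker-desktop
```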

Verify with docker compose ps, then open http://localhost:8067. You should again see the Odoo database creation screen. Create a database named komiti_library_dev — this is your development instance where you will install and test your module as you build it.

Step 7 — understand what you now have.

At this point, two independent Odoo instances are running on your machine. They see the same addon files (both bind-mount custom-addons/ from the repo root), but they differ in two important ways: they listen on different host ports (8068 for prod, 8067 for dev), and each stack has its own PostgreSQL database, volumes, and network, so nothing you do in dev can affect prod data.

Note: this is why “code exists on disk” and “module is installed in Odoo” are not the same thing. Both containers can read the same addon files from custom-addons/, but each database decides independently which modules are actually installed and active.
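The distinction can be sketched in plain shell; the module names below are made up for illustration.

```shell
# Same files on disk, two independent "installed" registries per database.
on_disk="library_core library_loans"   # what BOTH containers see in /mnt/extra-addons
prod_installed=""                      # komiti_library_prod: nothing installed yet
dev_installed="library_core"          # komiti_library_dev: one module installed

for mod in $on_disk; do
  case " $prod_installed " in *" $mod "*) p=installed ;; *) p="on disk only" ;; esac
  case " $dev_installed "  in *" $mod "*) d=installed ;; *) d="on disk only" ;; esac
  echo "$mod: prod=$p dev=$d"
done
# prints:
#   library_core: prod=on disk only dev=installed
#   library_loans: prod=on disk only dev=on disk only
```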

Stopping and restarting. When you are done working, run docker compose down in each compose directory: it stops and removes the containers but keeps the named volumes, so your databases and filestore survive. Avoid docker compose down -v, which also deletes the volumes and therefore all your data.

Next time you resume, navigate to the compose directory and run docker compose up -d. As long as you haven’t removed the volumes, your database and filestore will still be there.
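As a cheat-sheet, the lifecycle can be summarized in a tiny helper function; the intent names (pause/resume/clean/wipe) are made up for this sketch, but the commands are standard Docker Compose.

```shell
# Map what you want to happen to the compose command that does it.
# Only `down -v` touches the named volumes, i.e. your data.
compose_cmd() {
  case "$1" in
    pause)  echo "docker compose stop"    ;;  # stop containers, keep everything
    resume) echo "docker compose up -d"   ;;  # start (or recreate) containers
    clean)  echo "docker compose down"    ;;  # remove containers, keep volumes
    wipe)   echo "docker compose down -v" ;;  # remove containers AND volumes
  esac
}
compose_cmd resume   # -> docker compose up -d
compose_cmd wipe     # -> docker compose down -v
```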

What to understand from this option: you now have a working two-environment Odoo setup that is functionally equivalent to what Option A produces via Terraform. The difference is that you did not practice Terraform project structure, variables, or the plan/apply cycle. If you later get cloud access, you can return to Option A to learn that layer. For now, you have everything you need to continue with the Odoo module development tutorials.

7) What to read next

The following files live in the odoo4komiti repository. Open them from your local clone:


8) Task on the komiti_academy project for candidates

  1. Write a short infra/runtime diagnostic note for komiti_academy: what would you check first if the module doesn’t work, and what if the runtime itself isn’t healthy.
    Reference: This is explained in chapters 4.8) How AWS, Docker, and Terraform connect into a single flow and the self-check in task 5 below.
  2. For komiti_academy, list which runtime assumptions must be true before actually verifying that the module works.
    Reference: This is explained in chapters 3) Docker and container fundamentals you need to know, 4.5.1) terraform.tfstate, and 4.10) Minimal safe workflow in KomITi.
  3. Explain with one example why a container restart, a module upgrade, and actually verifying the affected flow are not the same thing.
    Reference: This is explained in chapters 3) Docker and container fundamentals you need to know, 4.8) How AWS, Docker, and Terraform connect into a single flow, and 4.10) Minimal safe workflow in KomITi.
  4. Design, and describe locally in your komiti_library repo, a Terraform lab that uses Docker Desktop to bring up a Postgres and Odoo stack for a local dev runtime, with a clear file structure, variables, outputs, and a clear workflow sequence.
    Reference: This is explained in chapters 4.5.2) variables.tf, terraform.tfvars.example, and terraform.tfvars, 4.3) Directory structure: general and KomITi-specific, 4.9.1) Plan is not a formality, 4.10) Minimal safe workflow in KomITi, and 6) Lab: bring up the Odoo runtime for development.
  5. Self-check: after reading the entire document, answer each of these in one or two sentences without guessing:
    • What does AWS provide and what does it not provide?
    • What is an image and what is a container?
    • What does docker compose do?
    • What is a Terraform provider?
    • What is a resource and what is a data source?
    • Why is Terraform state critical?
    • Why do you read plan before apply?
    • Why do terraform.tfvars and terraform.tfstate not go into git?
    • Why is terraform apply not the same as application verification?
    • How do you read an AWS Terraform folder as a system, not as a collection of random .tf files?
    • How do you distinguish an AWS problem, a Docker/runtime problem, and an Odoo functional problem?
    Reference: these topics are covered throughout this document. If you cannot answer confidently, re-read the relevant sections before continuing.
  6. Practical drill on the real odoo4komiti infra: open infra/aws/odoo-dev-ec2-compose in your local clone and do the following:
    1. Find variables.tf, compute.tf, security.tf, outputs.tf.
    2. Explain which .tf file defines the host, which defines the network boundary, and which defines the operator outputs.
    3. Explain where in that stack Terraform ends and Docker/runtime reasoning begins.
    4. Explain the difference between a shared instruction file such as .github/instructions/terraform.instructions.md and a root stack directory such as infra/aws/odoo-dev-ec2-compose/.
    Reference: 4.3) Directory structure, 4.4) Terraform files in [modules/], 4.5) Terraform files in [root stack/].

Solutions: