This is a foundation document for candidates and every new KomITi engineer. It teaches the infrastructure layer from scratch — not through abstract toy examples, but through the real KomITi AWS/Terraform/Docker context.
The purpose is not to turn you into a cloud/platform specialist in 4 hours, but to give you an operational mental model:
- what AWS does in KomITi,
- what Docker and the `docker compose` terminal command do,
- what Terraform and its terminal commands do,
- and how these three layers connect into a single system.
By the end of this document you will understand the full infra stack, and in the hands-on lab (section 6) you will bring up the Odoo runtime that serves as the foundation for all future tutorials.
Table of Contents
- What is the infra stack in KomITi
- AWS fundamentals you need to know
- Docker and container fundamentals you need to know
- Terraform
- Terraform mental model
- What are provider, resource, data source, and output
- Directory structure: general and KomITi-specific
- Terraform files in `modules/`
- Terraform files in the root stack
- Dependency reasoning
- Summarizing what Terraform code typically means in this repo
- How AWS, Docker, and Terraform connect into a single flow
- How to turn Terraform files into action and materialize artifacts
- Minimal safe workflow in KomITi
- Terraform vs Docker Compose: same information, different place of record
- Lab: bring up the Odoo runtime for development
- What to read next
- Task on the komiti_academy project for candidates
1) What is the infra stack in KomITi
When we say “infra” in this repo, we don’t mean a single tool, but multiple layers working together:
- AWS is the cloud substrate: VM, network, IP, security boundary, disk,
- Docker is the runtime packaging and local orchestration layer: how a service is packaged and run in a container, and how multiple services are coordinated via the `docker compose` terminal command,
- Terraform is the infrastructure-as-code layer that describes and modifies AWS resources,
- Odoo/Caddy/Postgres are the application/runtime workload that lives on that layer.
The professional thinking here is:
- Terraform does not replace Docker,
- Docker does not replace AWS,
- AWS does not replace application verification,
- each layer has its own purpose, its own risk, and its own operational vocabulary.
2) AWS fundamentals you need to know
AWS (Amazon Web Services) is a cloud platform — a collection of on-demand computing services (servers, storage, networking, databases, and more) hosted in Amazon's data centres worldwide. Instead of buying and maintaining physical hardware, you rent exactly the resources you need and pay only for what you use.
In this learning context, you don’t study AWS as a catalog of 200 services, but as a minimal operational set:
- region: the geographic/operational context in which resources live, and in which you must consistently reason about latency, availability, and resource locality,
- EC2: the virtual machine where the runtime actually lives — in KomITi, a single host carries the entire runtime layer,
- VPC/subnet: network space and segmentation — part of production safety, not a side detail,
- security group: inbound/outbound firewall boundary that controls who is allowed to talk to your compute (SSH/HTTP/HTTPS),
- EIP (Elastic IP): a stable public IP address — how other systems reach the host as a stable endpoint,
- S3: the object storage layer for backups and operational artifacts, decoupled from the VM lifecycle,
- Route 53/DNS thinking: how a domain reaches the right host,
- disk/volume reasoning: runtime is not just CPU and RAM but also storage.
3) Docker and container fundamentals you need to know
Docker is a platform for building, shipping, and running applications inside containers. Solomon Hykes created it in 2013 at dotCloud to solve the "works on my machine" problem by packaging an application with everything it needs to run.
A container is an isolated runtime process with its own filesystem view, network namespace, and agreed-upon entrypoint — not a virtual machine. Containers build on Linux kernel features (cgroups and namespaces) that date back to 2006–2008, but Docker made them practical and accessible.
You need to know these concepts:
- image — the template from which a container is created.
- container — a running instance of that image.
- volume — persistent data you don’t want to lose when the container is re-created.
- port mapping — how a service inside a container becomes accessible to the host.
- environment variables — runtime configuration injected at startup.
- `docker compose` — a declarative YAML description of multiple services working together.
In the KomITi stack you will encounter these patterns early:
- Odoo rarely runs alone — it needs Postgres, a reverse proxy, and other helper layers.
- `docker compose` is the contract that defines how these services live together.
- Restarting a container is not the same as rebuilding an image.
- Ephemeral container filesystem is not the same as persistent volume data.
3.1) Docker file map in odoo4komiti
Once you understand the Docker concepts, the next question is: where do those decisions live in the odoo4komiti repo?
This is the Docker equivalent of the Terraform file map in section 4: you do not read the repo as a random folder dump; you read it by responsibility.
3.2) docker-compose.yml
docker-compose.yml is the entry point of the local Docker stack. Docker Compose was introduced by Docker in 2014 so teams could define a multi-container application in one YAML file.
In odoo4komiti, this file answers the practical runtime questions:
- which services exist (here: `odoo-web` and `db`) and which image each service uses or builds,
```yaml
services:
  odoo-web:
    build:
      context: .                    # build context = repo root
      dockerfile: Dockerfile.odoo   # custom Dockerfile (adds boto3 to stock Odoo image)
    image: odoo:19.0-boto3          # tag for the built image
  db:
    image: postgres:16              # official PostgreSQL 16 image from Docker Hub
```
- which ports are exposed to the host,
```yaml
odoo-web:
  ports:
    - "8069:8069"   # host:container — access Odoo at localhost:8069
db:
  ports:
    - "5432:5432"   # host:container — expose PostgreSQL to host (dev convenience)
```
- which volumes persist data,
```yaml
odoo-web:
  volumes:
    - odoo-data:/var/lib/odoo                                        # filestore, sessions (named volume)
    - ./config/${ODOO_CONF_FILE:-odoo.conf}:/etc/odoo/odoo.conf:ro   # Odoo config file (read-only bind mount)
    - ./custom-addons:/mnt/extra-addons:rw                           # your custom modules (bind mount)
    - ./third-party-addons:/mnt/third-party-addons:rw                # OCA / community modules (bind mount)
db:
  volumes:
    - pg-data:/var/lib/postgresql/data   # database files (named volume)
```
- which environment variables are injected,
```yaml
odoo-web:
  environment:
    AWS_ACCESS_KEY_ID:       # passed from host; used for S3 backups (not needed locally)
    AWS_SECRET_ACCESS_KEY:   # passed from host; used for S3 backups (not needed locally)
    AWS_DEFAULT_REGION:      # passed from host; used for S3 backups (not needed locally)
    HOST: db                 # network alias of the postgres service
    PORT: 5432               # default PostgreSQL port
    USER: odoo               # must match POSTGRES_USER in db service
    PASSWORD: odoo123        # must match POSTGRES_PASSWORD in db service
db:
  environment:
    POSTGRES_DB: postgres        # bootstrap DB for server start; will be unused after admin creates its own
    POSTGRES_USER: odoo          # PostgreSQL superuser; Odoo connects as this user for ALL database operations
    POSTGRES_PASSWORD: odoo123   # must match PASSWORD in the odoo-web service
```
The username `odoo` is a community convention used in every official Odoo Docker example — changing it does not add real security (that is security through obscurity). What matters is a strong password. In production, replace `odoo123` with a long random string; the username can stay `odoo`.
┌─────────────────────────────────────────────┐
│ odoo service │
│ │
│ environment: │
│ HOST: db ─────────────────────────┐ │
│ PORT: 5432 ───────────────────┐ │ │
│ USER: odoo ───────────────┐ │ │ │
│ PASSWORD: │ │ │ │
│ komiti-library-dev-local│ │ │ │
│ │ │ │ │
└──────────────────────────────┼───┼───┼──────┘
│ │ │
must match ══════╪═══╪═══╪══════
│ │ │
┌──────────────────────────────┼───┼───┼──────┐
│ postgres service │ │ │ │
│ ▼ ▼ ▼ │
│ environment: │
│ POSTGRES_USER: odoo ◄─────┘ │ │ │
│ POSTGRES_PASSWORD: │ │ │
│ komiti-library-dev-local ◄──┘ │ │
│ POSTGRES_DB: postgres │ │
│ │ │
│ listens on port 5432 ◄─────────────┘ │
│ │ │
│ networks: │ │
│ library-dev: │ │
│ aliases: │ │
│ - db ◄────────────────────────┘ │
│ │
└─────────────────────────────────────────────┘
- which service depends on which other service.
```yaml
odoo-web:
  depends_on:
    db:
      condition: service_healthy   # Odoo waits until pg_isready passes before starting
```
If you want to understand why Odoo appears on localhost:8069, why Postgres is on 5432, why the filestore survives a container restart, or why Odoo waits for the database healthcheck, docker-compose.yml is the first file you should read.
A junior mistake is to leave too much at the default value. If you want to increase security, the first things that must change are:
- the default database password,
- publicly exposed ports — for example, do not expose Postgres unnecessarily,
- plain-text credentials handling — do not keep production-grade secrets in a local-style `.env`, and do not treat a development Docker Compose file as a production security baseline.
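As one illustration of the port-exposure point, an override file can bind Postgres to the loopback interface only. This is a sketch, not taken from the repo; the service name `db` follows the examples above:

```yaml
# docker-compose.override.yml — illustrative hardening sketch
services:
  db:
    ports:
      - "127.0.0.1:5432:5432"   # reachable from this host only, not from the network
```

With this override, tools on your machine can still reach Postgres, but other machines on your network cannot.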
3.3) Dockerfile.odoo
Dockerfile.odoo defines the custom image for the Odoo service. A Dockerfile was introduced with Docker itself in 2013 so an image build could be described as code instead of manual shell steps.
In this repo, Dockerfile.odoo starts from the official odoo:19.0 base image, switches to root, installs python3-boto3, and then switches back to the odoo user. That tells you something important about responsibility:
- `docker-compose.yml` says how services run together,
- `Dockerfile.odoo` says what the Odoo image contains before the container starts.
If a Python package is missing inside the Odoo container, that is usually a Dockerfile.odoo question, not a Compose question.
3.4) .env and config/*.conf
.env provides local environment variables for Docker Compose. In this repo it selects which Odoo config file to mount and can also provide AWS-related variables needed by the local runtime. Because these values can be sensitive and machine-specific, .env is local-only and must not be treated like normal committed source code.
The config/ directory holds the Odoo application configuration itself:
- `odoo.conf` — the shared/default config,
- `odoo.local.conf` — your local override,
- `odoo.local.conf.example` — the template you copy when setting up a machine.
The short mental model is: .env decides which config and variables the container receives; config/*.conf decides how Odoo behaves after it starts.
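As a minimal sketch, a local `.env` for this model could look like this (only `ODOO_CONF_FILE` appears in the compose examples above; treat the exact variable set as machine-specific):

```
# .env — local-only, never committed
ODOO_CONF_FILE=odoo.local.conf   # selects which file from config/ gets mounted as /etc/odoo/odoo.conf
```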
4) Terraform
4.1) Terraform mental model
Terraform is an infrastructure-as-code (IaC) tool — you describe the infrastructure you want in code, and the tool builds it for you. HashiCorp released it in 2014 to give teams a single declarative language for provisioning resources across any cloud provider.
The core loop:
- Describe — you write the desired state in `.tf` files. In the KomITi stack, both DEV and PROD environments are defined this way, not through the AWS console.
- Compare — Terraform diffs your code against what actually exists in AWS.
- Plan — `terraform plan` shows exactly what will change before anything touches production.
- Apply — `terraform apply` executes the plan.
- State — `terraform.tfstate` records what Terraform owns. You will find it on your local disk (not in git), e.g. `infra/aws/odoo-dev-ec2-compose/terraform.tfstate`.
4.2) What are provider, resource, data source, and output
A provider is a plugin that enables Terraform to communicate with a given system. In this repo, the most important one is the AWS provider — Terraform by itself doesn’t know what EC2, VPC, or Elastic IP is; the AWS provider gives it that vocabulary and API bridge.
resource describes something that Terraform creates or modifies, such as an EC2 instance, security group, elastic IP, route table etc.
data reads something that already exists without creating or modifying it. For example, you might use a data block to look up the latest Ubuntu AMI ID so your EC2 resource can reference it. The key difference:
- resource — Terraform owns the lifecycle (create, update, destroy).
- data source — Terraform only reads. If you delete the `data` block, nothing gets destroyed in AWS — the thing it referenced still exists.
output exposes a value to your terminal so you can use it after terraform apply finishes. Without outputs, Terraform creates the infrastructure but you have to dig through the AWS console to find what you need. In the KomITi stack, typical outputs are the server’s public IP, the SSH command to connect, and the HTTP/HTTPS URL — everything you need to start working with the environment immediately.
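A minimal sketch tying the four concepts together. The region, AMI filter, instance size, and resource names are illustrative, not copied from the KomITi repo:

```hcl
provider "aws" {
  region = "eu-central-1"          # illustrative region
}

# data source: look up an existing Ubuntu AMI without managing it
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]   # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# resource: Terraform owns this instance's full lifecycle
resource "aws_instance" "odoo_host" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.small"       # illustrative size
}

# output: expose the public IP to the operator after apply
output "public_ip" {
  value = aws_instance.odoo_host.public_ip
}
```

Deleting the `data "aws_ami"` block here would change nothing in AWS; deleting the `aws_instance` resource and applying would destroy the VM.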
4.3) Directory structure: general and KomITi-specific
When you first read a Terraform repo, it’s not enough to know what a resource is; you also need to know where things live.
The KomITi-specific layout looks roughly like this:
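A hedged sketch of that layout, reconstructed from the paths this document mentions (the exact file list in the repo may differ):

```
infra/aws/
├── modules/
│   └── odoo_ec2_compose/          # reusable module
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
└── odoo-dev-ec2-compose/          # root stack for DEV, calls the module
    ├── main.tf
    ├── variables.tf
    ├── outputs.tf
    ├── terraform.tfvars.example
    └── terraform.tfvars           # local-only, not in git
```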
A short rule for reading this layout: modules/odoo_ec2_compose/main.tf and modules/odoo_ec2_compose/variables.tf belong to the reusable module, while odoo-dev-ec2-compose/main.tf and odoo-dev-ec2-compose/variables.tf belong to the root stack that calls that module. In other words, they are not duplicates serving the same role: the module has its own internal Terraform API, and the root stack has its own environment-level API.
Practically, when we say “Terraform code for DEV”, in this repo that most often means: open infra/aws/odoo-dev-ec2-compose/ and read that directory as a single infrastructure system.
Keep in mind that the name odoo-dev-ec2-compose already carries 3 layers within it:
- `ec2` = AWS compute host,
- `compose` = Docker runtime orchestration on that host,
- Terraform `.tf` files = the infrastructure description that creates that host and its boundary.
4.4) Terraform files in `modules/`
Once you see that odoo_ec2_compose/ exists in modules/, it’s important not to think of it as just a helper folder. It is a reusable Terraform building block that root stacks call.
There are 3 key files in that module:
4.4.1) main.tf
main.tf in the module describes the internal logic of the reusable block: which AWS resources the module creates and how they are connected. In our KomITi example, the module builds multiple layers at once (VPC, subnet, route table, security group, etc.) so the root stack doesn’t have to write them from scratch every time. If you want to understand what odoo_ec2_compose actually does, the module main.tf is the first place to read.
4.4.2) variables.tf
variables.tf in the module defines the input API of that reusable block. Here the module says: if you want to use me, you must provide or may provide values such as name_prefix, env, allowed_ssh_cidr, instance_type, ssh_public_key, and so on. In other words: the root stack calls the module via its own main.tf, and the module’s variables.tf determines what that call is allowed and required to pass.
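As a sketch, the input API described above might look like this in the module's `variables.tf` (the variable names come from this document; types and defaults are illustrative):

```hcl
variable "name_prefix" {
  type        = string
  description = "Prefix applied to all resource names"
}

variable "env" {
  type        = string
  description = "Environment label, e.g. dev or prod"
}

variable "allowed_ssh_cidr" {
  type        = string
  description = "CIDR range allowed to reach SSH"
}

variable "instance_type" {
  type        = string
  default     = "t3.small"   # illustrative default; callers may override
}
```

Variables without a `default` are required: the root stack must pass them, or `terraform plan` fails.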
4.4.3) outputs.tf
outputs.tf in the module defines what the module returns back to the root stack. In our case, these are important operational values such as public IP, elastic IP, or backup bucket name. This matters because the root stack often doesn’t just want to “fire up” the module, but also to take some of its results and show them to the operator or pass them further.
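A minimal sketch of such a module `outputs.tf`. The internal resource names (`aws_eip.this`, `aws_s3_bucket.backup`) are assumptions for illustration:

```hcl
output "public_ip" {
  value = aws_eip.this.public_ip          # assumes the module defines aws_eip.this
}

output "backup_bucket_name" {
  value = aws_s3_bucket.backup.bucket     # assumes the module defines aws_s3_bucket.backup
}
```

The root stack can then reference these as `module.<name>.public_ip` and re-export them in its own outputs.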
4.5) Terraform files in the root stack
Once you understand the directory structure, it makes sense to also read what the key Terraform files inside that stack do.
4.5.1) Terraform state: terraform.tfstate
terraform.tfstate is the critical artifact that contains the state of all resources.
State remembers:
- which resources Terraform considers its own,
- their IDs,
- attributes needed for the next plan,
- the dependency graph that has already been materialized at runtime.
Key rules:
- state is not committed to git (it may contain sensitive data),
- losing state is not a “minor problem” but an operational problem,
- source code alone is not sufficient: Terraform state + cloud runtime = the actual truth,
- do not make manual changes in the AWS console — code can say one thing, state can remember another, and the cloud runtime can look like a third (drift),
- if you manually change a resource in an incident, document what was done and return to code discipline as soon as possible,
- read `terraform plan` also as a drift detector, not just as an apply prelude.
4.5.2) variables.tf, terraform.tfvars.example, and terraform.tfvars
Variables are the input parameters of a Terraform configuration.
They exist so that code doesn’t hardcode:
- CIDR rules,
- instance type,
- domain,
- key naming/runtime values,
- sometimes credentials/secrets, if the model allows it.
There are 3 distinct files to understand here:
variables.tf defines which input parameters exist, their types, and what is optional or has a default. It is the schema the stack expects (e.g. instance_type, domain_name, allowed_ssh_cidr).
terraform.tfvars contains the actual local values used during plan/apply (terraform.tfvars.example is a template for it). Important: DEV and PROD intentionally don’t use the same secret model. For PROD, terraform.tfvars must not contain production secrets — those live on the PROD EC2 server itself, not in Terraform git artifacts.
A Terraform beginner assumes it’s normal to put every password in tfvars. Think about whether that value ends up in state, who has access to that state, and whether the secret belongs to the Terraform layer or the bootstrap/runtime layer.
Key discipline:
- `terraform.tfvars` can be sensitive — it does not go into git,
- for PROD, don’t push DB/Odoo passwords into Terraform if they end up in state,
- `terraform.tfstate` and `terraform.tfstate.backup` are local state artifacts and must not be version-controlled.
4.5.3) main.tf
main.tf is the entry point of a root stack: which modules are used, how they are called, and with which values. Both this file and the module main.tf from 4.4.1) are called main.tf, but they operate at different levels: the module one is internal implementation, this one is environment-level orchestration.
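A sketch of how a root stack `main.tf` calls the module; the source path assumes the directory structure from section 4.3, and the values are illustrative, not the real KomITi configuration:

```hcl
module "odoo" {
  source = "../modules/odoo_ec2_compose"   # path assumed from the layout in 4.3

  name_prefix      = "komiti"
  env              = "dev"
  allowed_ssh_cidr = "203.0.113.10/32"     # illustrative operator IP
  instance_type    = "t3.small"
}
```

Note that the argument names inside the block must match what the module's `variables.tf` declares — that file is the contract this call is written against.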
4.5.4) network.tf
network.tf is the file where the network topology logic most often lives: VPC, subnets, route tables, internet gateway, and similar. If you want to understand where EC2 actually lives and how it even reaches the internet, this is one of the first files you should read.
4.5.5) outputs.tf
outputs.tf is the file that declares which important values Terraform exposes to the operator after apply. These are often public IP, SSH command, URL, or some other identifier you need immediately for the next operational step.
That’s why outputs.tf is important: it’s the bridge between infra code and operator work. Without it, Terraform can successfully create resources while the person doing the deploy still can’t quickly see what their most important next entry point into the system is.
4.6) Dependency reasoning
One of the most important Terraform concepts is the dependency graph.
Terraform must know:
- what depends on what,
- in what order resources are created,
- what may only be destroyed after what,
- where a reference means an implicit dependency.
Practically:
- EC2 can depend on the subnet and security group,
- route table association depends on the network,
- an output often depends on a resource that was just created.
On the local komiti_academy Docker Desktop lab, that dependency graph looks roughly like this. If you have Graphviz dot installed, you can generate it from the terminal:
```shell
terraform graph | dot -Tpng > graph.png
```
Here is the same relationship as an ASCII diagram:
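A sketch of that graph, reconstructed from the relationships described in the text (resource names follow the mapping table in section 5; read each arrow as “depends on”):

```
docker_container.odoo
  ├──► docker_container.postgres
  │      ├──► docker_image.postgres
  │      ├──► docker_network.odoo
  │      └──► docker_volume.postgres_data
  ├──► docker_image.odoo
  └──► docker_volume.odoo_data
```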
Read the arrows as “depends on”: docker_container.odoo depends on docker_container.postgres because Odoo cannot start without a running database. Postgres in turn depends on its image, a shared network, and a persistent volume for data. Odoo also needs its own image and a volume for the filestore. Terraform reads this graph and creates resources in the right order — volumes and images first, then postgres, then odoo.
A common junior mistake is to read Terraform file by file. The dependency graph is a better starting point — it shows you the actual creation order and which resources are connected.
4.7) Summarizing what Terraform code typically means in this repo
When reading AWS Terraform directories, think like this:
- `network.tf` = networking topology
- `security.tf` = security boundary
- `compute.tf` = instance/runtime host
- `locals.tf` = naming/tagging/helper composition
- `variables.tf` = what is configurable
- `outputs.tf` = what the operator needs after apply
- `templates/*.tpl` = rendered bootstrap/user-data content
This is the infrastructure equivalent of the Odoo mental model:
- model → resource
- field → argument/attribute
- action/menu wiring → dependency/reference wiring
- runtime upgrade → plan/apply cycle
4.8) How AWS, Docker, and Terraform connect into a single flow
The most useful foundational mental model for this repo is the following sequence:
- Terraform defines AWS resources.
- AWS provides the host, network, security boundary, and public endpoint.
- On that host, Docker/Compose runs the application services.
- Only then do you measure whether the Odoo runtime is actually healthy.
That’s why it’s important not to mix problem classes:
- if a security group isn’t allowing traffic, that’s not a Docker bug,
- if a container won’t start a service, that’s not necessarily a Terraform bug,
- if Odoo is up but a functional flow doesn’t work, that’s no longer a pure infra problem.
4.9) How to turn Terraform files into action and materialize artifacts (Docker containers, AWS resources, Odoo)
4.9.1) Plan is not a formality. terraform plan is a terminal command, and it is not a checkbox before apply.
Its purpose is to clearly show you:
- what will be created,
- what will be modified,
- what will be deleted,
- whether a seemingly small change has a large blast radius.
Professional thinking means:
- first you read the plan,
- then you assess the impact,
- only then do you run `apply`.
If you can’t read a Terraform plan, then you are not yet operationally safe for infrastructure work.
4.9.2) Apply is not a deploy script. The terminal command terraform apply does not mean “I launched the server and I’m done”.
Apply means:
- Terraform has applied an infrastructure change,
- state has been updated,
- the cloud resource layer has been brought closer to the described state.
But that is not the same as:
- the application being functional,
- the compose stack being healthy,
- Odoo being operational,
- day-2 ops steps being completed.
In the KomITi AWS context, the flow often goes like this:
- Terraform brings up the infrastructure skeleton.
- The bootstrap/compose/runtime layer brings the application to an operational state.
- Then verification follows.
This is the same mental discipline as with Odoo: infra code truth is not the same as runtime truth.
One more important command: terraform destroy tears down all resources Terraform manages. In a lab, it is how you cleanly remove everything. In production, it is irreversible and must never be run without explicit intent and backup.
4.10) Minimal safe workflow in KomITi
When making a Terraform change, the minimum safe sequence is:
- understand what you’re changing at the resource level,
- verify variables and environment context,
- run `terraform init` if needed,
- run `terraform validate`,
- run `terraform plan`,
- read the impact,
- only then run `terraform apply`,
- verify outputs and runtime,
- document the operational delta if significant.
This is not bureaucracy; it is basic production discipline.
5) Terraform vs Docker Compose: same information, different place of record
Terraform files and Docker Compose complement each other, since you use each for what it does best. This is the case in odoo4komiti, which is a real production system. But in this komiti_academy lab — which is much simpler than odoo4komiti — you should view these two approaches as mutually exclusive variants (you use either Terraform or Docker Compose, not both simultaneously for the same runtime). Below is a mapping of the local komiti_academy lab so you can clearly see where the same runtime information is recorded in the Terraform variant versus the Compose variant.
| Information | Terraform | Compose |
|---|---|---|
| Local academy runtime name | `locals.tf` – `name_prefix` | `docker-compose.yml` – `name: komiti-academy-dev` |
| Odoo image | default: `variables.tf` – `odoo_image = odoo:19.0`; actual value: `terraform.tfvars` – `odoo:19.0`; where applied: `compute.tf` – `docker_image.odoo` | `docker-compose.yml` – `services.odoo.image` |
| Postgres image | default: `variables.tf` – `postgres_image = postgres:16`; actual value: `terraform.tfvars` – `postgres:16`; where applied: `compute.tf` – `docker_image.postgres` | `docker-compose.yml` – `services.postgres.image` |
| Host port for Odoo | default: `variables.tf` – `odoo_port = 8067`; actual value: `terraform.tfvars` – `odoo_port = 8067`; where applied: `compute.tf` – `ports.external = var.odoo_port` | `docker-compose.yml` – `services.odoo.ports` |
| Mapping 8067 → 8069 | `compute.tf` – `ports { external = var.odoo_port, internal = 8069 }` | `docker-compose.yml` – `"${ODOO_PORT:-8067}:8069"` |
| Initial Postgres DB | default: `variables.tf` – `postgres_db = postgres`; actual value: `terraform.tfvars` – `postgres`; where applied: `compute.tf` – `POSTGRES_DB = ${var.postgres_db}` | `docker-compose.yml` – `POSTGRES_DB` |
| Postgres user | default: `variables.tf` – `postgres_user = odoo`; actual value: `terraform.tfvars` – `admin.komiti_odoo`; where applied: `compute.tf` – `POSTGRES_USER` and Odoo `USER` | `docker-compose.yml` – `POSTGRES_USER` and Odoo `USER` |
| Postgres password | default: `variables.tf` – `postgres_password`, no default value; actual value: `terraform.tfvars` – `komiti-academy-local-dev`; where applied: `compute.tf` – `POSTGRES_PASSWORD` and Odoo `PASSWORD` | `docker-compose.yml` – `POSTGRES_PASSWORD` and Odoo `PASSWORD` |
| Odoo connects to Postgres on host `db` | `compute.tf` – Odoo env `HOST=db` and Postgres network alias `db` | `docker-compose.yml` – Odoo `HOST: db` and network alias `db` |
| Addons bind mount | `locals.tf` – `addons_host_path`; `compute.tf` – used in volume mount to `/mnt/extra-addons` | `docker-compose.yml` – `../../../custom-addons:/mnt/extra-addons:rw` |
| Odoo data volume | `compute.tf` – `docker_volume.odoo_data` | `docker-compose.yml` – `odoo-data:/var/lib/odoo` |
| Postgres data volume | `compute.tf` – `docker_volume.postgres_data` | `docker-compose.yml` – `postgres-data:/var/lib/postgresql/data` |
| Network | `network.tf` – `docker_network.odoo` | `docker-compose.yml` – `networks.academy` |
| Odoo dependency on Postgres | `compute.tf` – `depends_on = [docker_container.postgres]` | `docker-compose.yml` – `depends_on.postgres.condition: service_healthy` |
| URL output | `outputs.tf` – `odoo_url` | No output block; the practical equivalent is `ports` and the command `docker compose ps` |
Especially important notes:
| Topic | Terraform | Compose | What this means |
|---|---|---|---|
| Master Password for localhost Odoo | Not explicitly defined | Not explicitly defined | None of your academy files currently set `admin_passwd`, so the password you see in the browser does not come directly from this Terraform/Compose wiring. |
| Postgres healthcheck | None | Present | The Compose variant is a bit richer here, because it has an explicit healthcheck and waits for Postgres to be healthy before starting Odoo. |
| Docker Desktop UI grouping | No Compose metadata | Has Compose metadata | That’s why Compose is grouped in the Docker Desktop UI, while Terraform Docker provider resources look more like individual Docker objects. |
| State | `terraform.tfstate` | No single state file | Terraform remembers the desired and actual state in its state file, while Compose relies more on the current Docker engine state and Compose project metadata labels. |
6) Lab: bring up the Odoo runtime for development
In this lab you will bring up the runtime on which you will — from zero to hero — develop the Odoo module. This lab is not part of the core product scope of the komiti_academy module, but a learning/support exercise so you can locally run and verify everything you build in the later tutorials.
During this lab you will bring up two separate Odoo instances on your machine:
- prod — tied to the `main` branch, running on port `8068`. This represents your production-equivalent baseline: only released, verified code is installed here.
- dev — tied to your feature branch, running on port `8067`. This is where you actively develop, install your in-progress module, and test before merging.
For simplicity we skip the staging branch in this lab. Your workflow will be feature → main (with a PR in between). Both instances share the same custom-addons/ folder on disk, but their databases are completely independent — so the prod database won’t have your half-finished module installed, even though the files are visible on the filesystem.
There are two options depending on your available resources:
Option A — for candidates WITH an AWS/Azure lab budget
If you have access to a cloud account where you can provision resources, this option lets you practice Terraform on real infrastructure. The purpose is not just to get a running Odoo, but for you to practically exercise:
- the structure of a Terraform project,
- `variables.tf`, `outputs.tf`, `terraform.tfvars.example`, and a local `terraform.tfvars`,
- separation between `dev` and `prod` variants of a stack,
- naming conventions,
- the `plan`/`apply`/`destroy` cycle,
- the idea that Terraform brings the runtime skeleton to a state where Odoo can start.
Even without real cloud, you can run the same Terraform exercise locally via the Docker provider. In that case, this lab does not teach real cloud networking, VM lifecycle, or cloud security boundaries. View it as a bridge: patterns around variables, outputs, naming, and environment separation transfer directly to AWS/Azure Terraform work later, but the local variant is not a 1:1 cloud template.
The suggested lab should live in your repo komiti_library, for example like this:
- `infra/local/odoo-dev-docker-desktop/`
- `infra/local/odoo-prod-docker-desktop/`
Within each of those two root directories, the candidate should have at least:
- `versions.tf`
- `main.tf`
- `variables.tf`
- `outputs.tf`
- `terraform.tfvars.example`
- a local `terraform.tfvars` that doesn’t go into git
The minimal runtime that Terraform should bring up is:
- one Docker network,
- one Postgres container,
- one Odoo container,
- a volume for Postgres data,
- volume or mount reasoning for Odoo config/addons if the lab reaches that phase,
- an operator output that says on which URL Odoo should be accessible.
A good minimum viable lab is:
- Terraform Docker provider locally manages the Docker Desktop runtime.
- `main.tf` creates the network, volume, and both containers.
- `variables.tf` carries the environment name, port, DB credentials for the lab, and naming prefix.
- `outputs.tf` outputs a URL like `http://localhost:8067` and key resource names.
- `terraform.tfvars.example` exists for both `dev` and `prod`, so the candidate sees the separation even when both stacks live locally.
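A hedged sketch of what the Postgres half of that `main.tf` could look like with the Terraform Docker provider. Resource names follow the mapping table in section 5; the variable names and the idea of extending this with an equivalent `docker_container.odoo` block are illustrative lab choices, not the repo's exact code:

```hcl
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"   # community Docker provider
    }
  }
}

provider "docker" {}                  # talks to the local Docker Desktop engine

resource "docker_network" "odoo" {
  name = "${var.name_prefix}-net"
}

resource "docker_volume" "postgres_data" {
  name = "${var.name_prefix}-pg-data"
}

resource "docker_image" "postgres" {
  name = var.postgres_image           # e.g. postgres:16
}

resource "docker_container" "postgres" {
  name  = "${var.name_prefix}-postgres"
  image = docker_image.postgres.image_id

  env = [
    "POSTGRES_USER=${var.postgres_user}",
    "POSTGRES_PASSWORD=${var.postgres_password}",
    "POSTGRES_DB=${var.postgres_db}",
  ]

  networks_advanced {
    name    = docker_network.odoo.name
    aliases = ["db"]                  # Odoo reaches Postgres as host "db"
  }

  volumes {
    volume_name    = docker_volume.postgres_data.name
    container_path = "/var/lib/postgresql/data"
  }
}
```

The Odoo container follows the same pattern, with its own image, volume, and `depends_on = [docker_container.postgres]`, so Terraform creates the database first.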
The bootstrap sequence in this lab means:
- Terraform first brings the Docker resources to the desired state,
- then the Odoo container receives the environment/config to communicate with Postgres,
- only then do you verify whether the runtime is actually up.
The minimum safe lab flow follows the same sequence as section 4.10, ending with terraform destroy -var-file=terraform.tfvars to cleanly tear down the lab. If you successfully complete this path, you have truly practiced the Terraform shape and lifecycle discipline.
Option B — for candidates WITHOUT an AWS/Azure lab budget
Don’t worry — you can still do everything needed to continue learning. In this option you skip Terraform entirely and use Docker Compose to bring up Odoo directly on Docker Desktop. The result is the same two running Odoo instances (prod + dev) you need for the module development tutorials; the only difference is that Terraform infrastructure management is not part of your exercise.
Prerequisites:
- Docker Desktop installed and running (Windows, macOS, or Linux).
- Your `komiti_library` repo cloned locally (you did this in tutorial 02, section 4.4).
Step 1 — open a terminal and make sure you are on main.
PS C:\Users\you> cd C:\dev\komiti_library
PS C:\dev\komiti_library> git checkout main
PS C:\dev\komiti_library> git pull origin main
You start on main because infrastructure files belong to all branches — they are not feature-specific. By committing them to main first, every future feature branch will inherit them automatically.
Step 2 — create the directory structure and compose files.
You need two directories — one for each environment. You also need a custom-addons/ folder at the repo root (this is where your Odoo modules will live later). Create them:
PS C:\dev\komiti_library> mkdir infra/local/odoo-prod-docker-desktop
PS C:\dev\komiti_library> mkdir infra/local/odoo-dev-docker-desktop
PS C:\dev\komiti_library> mkdir custom-addons
PS C:\dev\komiti_library> echo $null > custom-addons/.gitkeep
`echo $null > custom-addons/.gitkeep` creates an empty file called `.gitkeep`. Git does not track empty directories, so without at least one file inside `custom-addons/` the folder would not appear in the repository. The name `.gitkeep` is a widely used convention — Git itself does not treat it specially.
Now create the prod compose file at `infra/local/odoo-prod-docker-desktop/docker-compose.yml`:
```yaml
name: komiti-library-prod  # Docker Compose project name (prefixes container names)

services:
  postgres:
    image: postgres:16  # official PostgreSQL 16 image
    restart: unless-stopped  # auto-restart unless you explicitly stop it
    environment:
      POSTGRES_DB: postgres  # bootstrap DB for server start; unused later
      POSTGRES_USER: odoo  # PostgreSQL superuser; Odoo connects as this user
      POSTGRES_PASSWORD: komiti-library-prod-local  # must match PASSWORD in the odoo service
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U odoo -d postgres"]  # checks if PostgreSQL is ready to accept connections
      interval: 10s  # check every 10 seconds
      timeout: 5s  # fail if no response within 5 seconds
      retries: 10  # mark unhealthy after 10 consecutive failures
    volumes:
      - postgres-data:/var/lib/postgresql/data  # persist database files across container restarts
    networks:
      library-prod:
        aliases:
          - db  # other services find postgres at hostname "db"

  odoo:
    image: odoo:19.0  # official Odoo 19 image
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy  # wait for pg_isready before starting Odoo
    environment:
      HOST: db  # network alias of the postgres service
      PORT: 5432  # default PostgreSQL port
      USER: odoo  # must match POSTGRES_USER above
      PASSWORD: komiti-library-prod-local  # must match POSTGRES_PASSWORD above
    ports:
      - "8068:8069"  # host:container — access prod Odoo at localhost:8068
    volumes:
      - odoo-data:/var/lib/odoo  # filestore, sessions (named volume)
      - ../../../custom-addons:/mnt/extra-addons:rw  # your modules from repo root (bind mount)
    networks:
      library-prod:
        aliases:
          - odoo  # optional: other services can find odoo by name

volumes:
  postgres-data:  # named volume for PostgreSQL data
  odoo-data:  # named volume for Odoo filestore

networks:
  library-prod:  # isolated network for this stack
```
`../../../custom-addons:/mnt/extra-addons:rw` is a bind mount. It tells Docker: “take the `custom-addons/` folder from the repo root on your Windows disk and make it appear inside the container at `/mnt/extra-addons`.” The path starts with `../../../` because the compose file lives three directories deep (`infra/local/odoo-prod-docker-desktop/`). The dev compose file will have the same line, so both environments see the exact same folder on disk.
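You can convince yourself of the `../../../` arithmetic without Docker at all. If you have a POSIX shell available (e.g. Git Bash), this throwaway sketch mirrors the repo layout in a scratch directory (all paths here are hypothetical) and resolves the relative path exactly as Compose does:

```shell
# Recreate the three-levels-deep layout in a scratch directory.
scratch=$(mktemp -d)
mkdir -p "$scratch/komiti_library/infra/local/odoo-prod-docker-desktop"
mkdir -p "$scratch/komiti_library/custom-addons"

# From the compose file's directory, go up three levels and into custom-addons:
# this lands at the repo root's custom-addons folder.
cd "$scratch/komiti_library/infra/local/odoo-prod-docker-desktop"
resolved=$(cd ../../../custom-addons && pwd)
echo "$resolved"
```

The printed path ends in `komiti_library/custom-addons`, which is why both environments mount the same folder even though their compose files live in different directories.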
Then create the dev compose file at `infra/local/odoo-dev-docker-desktop/docker-compose.yml`:
```yaml
name: komiti-library-dev  # Docker Compose project name (prefixes container names)

services:
  postgres:
    image: postgres:16  # official PostgreSQL 16 image
    restart: unless-stopped  # auto-restart unless you explicitly stop it
    environment:
      POSTGRES_DB: postgres  # bootstrap DB for server start; unused later
      POSTGRES_USER: odoo  # PostgreSQL superuser; Odoo connects as this user
      POSTGRES_PASSWORD: komiti-library-dev-local  # must match PASSWORD in the odoo service
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U odoo -d postgres"]  # checks if PostgreSQL is ready to accept connections
      interval: 10s  # check every 10 seconds
      timeout: 5s  # fail if no response within 5 seconds
      retries: 10  # mark unhealthy after 10 consecutive failures
    volumes:
      - postgres-data:/var/lib/postgresql/data  # persist database files across container restarts
    networks:
      library-dev:
        aliases:
          - db  # other services find postgres at hostname "db"

  odoo:
    image: odoo:19.0  # official Odoo 19 image
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy  # wait for pg_isready before starting Odoo
    environment:
      HOST: db  # network alias of the postgres service
      PORT: 5432  # default PostgreSQL port
      USER: odoo  # must match POSTGRES_USER above
      PASSWORD: komiti-library-dev-local  # must match POSTGRES_PASSWORD above
    ports:
      - "8067:8069"  # host:container — access dev Odoo at localhost:8067
    volumes:
      - odoo-data:/var/lib/odoo  # filestore, sessions (named volume)
      - ../../../custom-addons:/mnt/extra-addons:rw  # your modules from repo root (bind mount)
    networks:
      library-dev:
        aliases:
          - odoo  # optional: other services can find odoo by name

volumes:
  postgres-data:  # named volume for PostgreSQL data
  odoo-data:  # named volume for Odoo filestore

networks:
  library-dev:  # isolated network for this stack
```
The dev file is identical to prod except for the project name, the network name, the DB password, and the published host port (8068 for prod, 8067 for dev). Keep `POSTGRES_DB` as `postgres` in both files. The real Odoo databases (`komiti_library_prod` and `komiti_library_dev`) are created later from the Odoo web UI. If you set `POSTGRES_DB` directly to those names, Postgres creates empty databases first and Odoo may return HTTP 500 instead of the normal database creation screen.
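Reading the two files as a diff makes the separation explicit. Leaving aside the repeated `library-prod`/`library-dev` network references, the only changed lines are:

```diff
-name: komiti-library-prod
+name: komiti-library-dev
-      POSTGRES_PASSWORD: komiti-library-prod-local
+      POSTGRES_PASSWORD: komiti-library-dev-local
-      PASSWORD: komiti-library-prod-local
+      PASSWORD: komiti-library-dev-local
-      - "8068:8069"
+      - "8067:8069"
```

Everything else is byte-for-byte identical, which is exactly the point: two environments, one shape.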
Step 3 — commit the infrastructure files to main.
PS C:\dev\komiti_library> git add -A
PS C:\dev\komiti_library> git commit -m "Add infra: prod and dev Docker Compose stacks"
PS C:\dev\komiti_library> git push origin main
Infrastructure lives on main. Every feature branch you create later will inherit these files.
Step 4 — bring up the prod Odoo instance.
PS C:\dev\komiti_library> cd infra/local/odoo-prod-docker-desktop
PS C:\dev\komiti_library\infra\local\odoo-prod-docker-desktop> docker compose up -d
Docker will pull the postgres:16 and odoo:19.0 images (this may take a few minutes the first time). Once done, verify:
PS ...> docker compose ps
You should see two services: postgres (healthy) and odoo (running). Open http://localhost:8068 in your browser. You should see the Odoo database creation screen. Create a database named komiti_library_prod — this is your production baseline. Leave it clean for now; don’t install any custom module here yet.
Step 5 — create a feature branch.
PS ...> cd C:\dev\komiti_library
PS C:\dev\komiti_library> git checkout -b 2026-04-01-library-module
PS C:\dev\komiti_library> git push -u origin 2026-04-01-library-module
You are now on a feature branch created from main. The infrastructure files you committed in step 3 are already here.
Step 6 — bring up the dev Odoo instance.
PS C:\dev\komiti_library> cd infra/local/odoo-dev-docker-desktop
PS C:\dev\komiti_library\infra\local\odoo-dev-docker-desktop> docker compose up -d
In the `cd` command above we used forward slashes (`/`). PowerShell accepts both `/` and `\` when navigating directories, so `cd infra/local/odoo-dev-docker-desktop` and `cd infra\local\odoo-dev-docker-desktop` do the same thing. However, inside files like `docker-compose.yml`, YAML, Linux paths, and URLs always use forward slashes — backslashes will break them. Rule of thumb: use `\` only when Windows requires it (e.g. full absolute paths like `C:\dev\...`); everywhere else, forward slashes are safer and more portable.
Verify with docker compose ps, then open http://localhost:8067. You should again see the Odoo database creation screen. Create a database named komiti_library_dev — this is your development instance where you will install and test your module as you build it.
Step 7 — understand what you now have.
At this point, two independent Odoo instances are running on your machine, but they differ in two important ways:
- Ports are different:
  - prod runs at `localhost:8068` — clean baseline, no custom module installed.
  - dev runs at `localhost:8067` — your development playground, tied to the feature branch.
- The addon folder is shared on disk, but module installation is per database:
  - prod sees the same `custom-addons/` files because it mounts the same folder, but you keep the module uninstalled there until you intentionally install a stable, released version.
  - dev also sees the same files, and this is the instance where you install, upgrade, and test the module while you develop it. The installed state lives in the dev database, not in the folder itself.
  - Both instances mount the same `custom-addons/`, but each database decides independently which modules are actually installed and active.
Stopping and restarting. When you are done working:
- `docker compose stop` — stops containers but keeps data (run from the respective directory).
- `docker compose down` — stops and removes containers but keeps volumes (data survives).
- `docker compose down -v` — removes everything including data.
Next time you resume, navigate to the compose directory and run `docker compose up -d`. As long as you haven’t removed the volumes, your database and filestore will still be there.
What to understand from this option: you now have a working two-environment Odoo setup that is functionally equivalent to what Option A produces via Terraform. The difference is that you did not practice Terraform project structure, variables, or the plan/apply cycle. If you later get cloud access, you can return to Option A to learn that layer. For now, you have everything you need to continue with the Odoo module development tutorials.
7) What to read next
The following files live in the odoo4komiti repository. Open them from your local clone:
- `odoo4komiti/.github/instructions/terraform.instructions.md`
- `odoo4komiti/infra/aws/odoo-dev-ec2-compose/README.md`
- `odoo4komiti/infra/aws/odoo-dev-ec2-compose/RUNBOOK.md`
- `odoo4komiti/infra/aws/odoo-prod-ec2-compose/README.md`
- `odoo4komiti/infra/aws/odoo-prod-ec2-compose/RUNBOOK.md`
99) Task on the komiti_academy project for candidates
1. Write a short infra/runtime diagnostic note for `komiti_academy`: what would you check first if the module doesn’t work, and what if the runtime itself isn’t healthy.
   Reference: This is explained in chapters 4.8) How AWS, Docker, and Terraform connect into a single flow and the self-check in task 5 below.
2. For `komiti_academy`, list which runtime assumptions must be true before actually verifying that the module works.
   Reference: This is explained in chapters 3) Docker and container fundamentals you need to know, 4.5.1) `terraform.tfstate`, and 4.10) Minimal safe workflow in KomITi.
3. Explain with one example why a container restart, a module upgrade, and actually verifying the affected flow are not the same thing.
   Reference: This is explained in chapters 3) Docker and container fundamentals you need to know, 4.8) How AWS, Docker, and Terraform connect into a single flow, and 4.10) Minimal safe workflow in KomITi.
4. Design and locally describe a Terraform lab in your repo `komiti_library` that uses Docker Desktop to bring up a Postgres and Odoo stack for a local `dev` runtime, with a clear file structure, variables, outputs, and a clear workflow sequence.
   Reference: This is explained in chapters 4.5.2) `variables.tf`, `terraform.tfvars.example`, and `terraform.tfvars`, 4.3) Directory structure: general and KomITi-specific, 4.9.1) Plan is not a formality, 4.10) Minimal safe workflow in KomITi, and 6) Lab: bring up the Odoo runtime for development.
5. Self-check: after reading the entire document, answer each of these in one or two sentences without guessing:
   - What does AWS provide and what does it not provide?
   - What is an image and what is a container?
   - What does `docker compose` do?
   - What is a Terraform provider?
   - What is a `resource` and what is a `data` source?
   - Why is Terraform state critical?
   - Why do you read `plan` before `apply`?
   - Why do `terraform.tfvars` and `terraform.tfstate` not go into git?
   - Why is `terraform apply` not the same as application verification?
   - How do you read an AWS Terraform folder as a system, not as a collection of random `.tf` files?
   - How do you distinguish an AWS problem, a Docker/runtime problem, and an Odoo functional problem?
6. Practical drill on the real `odoo4komiti` infra: open `infra/aws/odoo-dev-ec2-compose` in your local clone and do the following:
   - Find `variables.tf`, `compute.tf`, `security.tf`, `outputs.tf`.
   - Explain which `.tf` file defines the host, which defines the network boundary, and which defines the operator outputs.
   - Explain where in that stack Terraform ends and Docker/runtime reasoning begins.
   - Explain the difference between a shared instruction file such as `.github/instructions/terraform.instructions.md` and a root stack directory such as `infra/aws/odoo-dev-ec2-compose/`.
- Find
Solutions: