I’m currently trying to wrap my head around the best-practice approach for schema sync and migration between development and production environments (both running in Docker), CI/CD, and multiple development machines.
From the docs and GitHub discussions I would try the following. Does this make sense? Would it work as intended?
All schema (collections, roles, flows) development happens locally on my machine. Whenever there are changes to the schema, I run a migration via the Schema API endpoints:
1. Schema sync development ↔ production
/schema/snapshot on the local instance and /schema/diff and /schema/apply on the production instance.
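Roughly, I imagine that flow looking something like this (a minimal sketch with placeholder hostnames and tokens; the exact request/response shapes should be double-checked against the Schema API docs for your Directus version):

```bash
# DEV_URL/PROD_URL and the tokens are placeholders, not real values
DEV_URL=https://dev.example.com
PROD_URL=https://prod.example.com

# 1. Take a snapshot of the local/dev schema (the response wraps the snapshot in a "data" key)
curl -s "$DEV_URL/schema/snapshot" \
  -H "Authorization: Bearer $DEV_TOKEN" | jq '.data' > snapshot.json

# 2. Let production compute the diff between its current schema and the dev snapshot
#    (if there is nothing to change, /schema/diff returns an empty response)
curl -s -X POST "$PROD_URL/schema/diff" \
  -H "Authorization: Bearer $PROD_TOKEN" \
  -H "Content-Type: application/json" \
  -d @snapshot.json | jq '.data' > diff.json

# 3. Apply that diff to production
curl -s -X POST "$PROD_URL/schema/apply" \
  -H "Authorization: Bearer $PROD_TOKEN" \
  -H "Content-Type: application/json" \
  -d @diff.json
```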
2. Schema sync development ↔ git repo
To check the schema into my repo I can create snapshots like
npx directus schema snapshot /directus-snapshots/$(date +"%Y%m%d%H%M%S").yaml and write them into a mounted Docker volume so that they surface in my repo.
On another development machine I could then apply the snapshot
npx directus schema apply /directus-snapshots/20250528154312.yaml
and update the instance to the current schema.
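So the round trip between two dev machines would look roughly like this (a sketch; the snapshot directory is the mounted volume from above, and the --yes flag to skip the confirmation prompt is from memory, so please verify):

```bash
# On the machine where the schema changed (run inside the Directus container,
# or anywhere the Directus env vars / database are reachable):
npx directus schema snapshot ./directus-snapshots/$(date +"%Y%m%d%H%M%S").yaml
git add directus-snapshots/
git commit -m "chore: schema snapshot"
git push

# On the other development machine:
git pull
npx directus schema apply ./directus-snapshots/20250528154312.yaml
# (add --yes to skip the interactive confirmation, if I recall correctly)
```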
Would this approach have any downsides?
Would you recommend automating the first part in my CI/CD setup? For example, placing the schema diff generated via /schema/diff?export=yaml in my repo and then running /schema/diff and /schema/apply automatically?
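Something along these lines is what I have in mind for the CI job (untested sketch; PROD_URL, PROD_TOKEN and the snapshot path are assumptions, and it assumes the snapshot is committed as JSON):

```bash
#!/usr/bin/env bash
set -euo pipefail
# CI sketch: diff the committed snapshot against production and apply it if anything changed.
# SNAPSHOT path, PROD_URL and PROD_TOKEN are assumptions (repo layout / CI secrets).

SNAPSHOT=directus-snapshots/latest.json

# /schema/diff returns an empty body when production already matches the snapshot
curl -sf -X POST "$PROD_URL/schema/diff" \
  -H "Authorization: Bearer $PROD_TOKEN" \
  -H "Content-Type: application/json" \
  --data-binary @"$SNAPSHOT" -o diff-response.json

if [ -s diff-response.json ]; then
  jq '.data' diff-response.json > diff.json
  curl -sf -X POST "$PROD_URL/schema/apply" \
    -H "Authorization: Bearer $PROD_TOKEN" \
    -H "Content-Type: application/json" \
    -d @diff.json
  echo "Schema diff applied to production"
else
  echo "Schema already up to date"
fi
```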
Appreciate any feedback!
Dev/QA/Prod Schema & Data Migration Best Practices:
- Schema changes made in Dev/Staging may be promoted to other envs via:
  - The Schema API (API Schema | Directus Docs)
  - Custom Migrations (Node.js SQL migrations)
  - Open-source utilities (use and/or adapt to your needs)
- In general, we recommend managing all production content (draft → … → published) in Prod
  - Use data fields (status, dates, category, etc.) with Policy Permissions, Content Versioning, Item Duplication and/or Flows to control the publishing process and procedures
  - NOTE: This is just a general recommendation
  - You may also use flows, hooks, custom migrations, scripts or any other means to trigger pipeline changes from one environment to another using any of the resources and utilities referenced here.
- Promoting new content/data changes that are part of data model updates to other envs – several options here:
  - Use Data Studio or the APIs to export and import (CSV, JSON, XML); see the sketch after this list
  - Use custom extension migrations
  - Direct database export/import (CSV)
    - Use this for content data or system table data (be careful here, but useful for some things)
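For the export/import option above, a rough sketch of what that can look like via the Items API (hosts, tokens and the "categories" collection are placeholders; adapt to your setup):

```bash
# Hypothetical example: copy seed records of a "categories" collection from staging to prod.
# Hosts, tokens and the collection name are placeholders.

# Fetch all items from staging (limit=-1 removes the default page size)
curl -s "$STAGING_URL/items/categories?limit=-1" \
  -H "Authorization: Bearer $STAGING_TOKEN" | jq '.data' > categories.json

# POST /items/:collection accepts an array, so this creates all records in one request on prod.
# Note: if the exported items carry primary keys that already exist in prod, this will conflict.
curl -s -X POST "$PROD_URL/items/categories" \
  -H "Authorization: Bearer $PROD_TOKEN" \
  -H "Content-Type: application/json" \
  -d @categories.json
```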
Thank you for pointing out all these different approaches. So many great options!
One last thing: what would you recommend for syncing the schema between different dev machines and for having a single source of truth for the schema (not content)? Checking snapshot files into the Git repo, as in my original post?