Getting Started
DockerForge generates production-ready Dockerfiles from any Git repository. Paste a URL, click Generate, and you get a complete, working Dockerfile in seconds. No config files required.
1. Enter a GitHub, GitLab, or Bitbucket URL into the input field. Specific branches and subfolders are supported too.
2. DockerForge fetches your repo structure, detects the stack, and builds a multi-stage Dockerfile optimised for production.
3. Copy the Dockerfile, .dockerignore, and deploy commands. Place them in your project root and run the build.
Input Methods
There are three ways to provide your project to DockerForge. Use whichever fits your workflow.
Alternatively, paste the output of a directory listing command such as `find` or `tree`. DockerForge infers the project structure from the output.
Supported URL formats
All of the following resolve correctly:
```
https://github.com/org/repo
https://github.com/org/repo/tree/main/packages/api
https://gitlab.com/org/repo
https://bitbucket.org/org/repo
```
SSH remotes are not supported directly; convert an SSH URL such as `git@github.com:org/repo.git` to the HTTPS equivalent before pasting.
Hints (optional)
Expand the Hints panel in the input card to provide extra context such as your Node version, exposed ports, or environment variable names. Hints override auto-detection when you know your setup better than the analyser does.
Supported Repositories
DockerForge reads repositories directly via provider APIs. No git binary is needed and nothing is cloned locally.
| Provider | Public repos | Private repos | Subfolder support |
|---|---|---|---|
| GitHub | Free | PAT required | via /tree/branch/path |
| GitLab | Free | PAT required | via /-/tree/branch/path |
| Bitbucket | Free | App password required | via /src/branch/path |
Detected stacks
DockerForge automatically identifies the framework and package manager from your repo. Supported stacks include Node.js projects (npm, Yarn, and pnpm, including frameworks such as NestJS), Python projects (`requirements.txt`), and .NET projects (`.csproj`).
Private Repositories
To generate a Dockerfile for a private repo you need two things: a Personal Access Token (PAT) from your Git provider, and a DockerForge API key.
Step 1: Create a Personal Access Token
Your PAT gives DockerForge read access to your repository. It is passed directly to the provider API and is never stored on our servers.
- GitHub: Settings > Developer settings > Personal access tokens > Fine-grained. Grant read-only access to the target repo (`contents: read`).
- GitLab: User Settings > Access Tokens. Set the scope to `read_repository`.
- Bitbucket: Personal settings > App passwords. Enable the `Repositories: read` permission.
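As an illustration of what "passed directly to the provider API" means: a read-only PAT typically travels as a bearer token on each request. This is a hypothetical sketch, not DockerForge's actual code, and the network call is shown only as a comment:

```shell
# Hypothetical sketch: how a fine-grained PAT is typically presented to the
# GitHub API. Placeholder token only -- never paste a real secret into scripts.
PAT="github_pat_EXAMPLE"

# A repo-tree fetch would look roughly like (comment only, no network call):
#   curl -H "Authorization: Bearer $PAT" \
#     "https://api.github.com/repos/org/repo/git/trees/main?recursive=1"

# Print the header shape the token is sent in:
printf 'Authorization: Bearer %s\n' "$PAT"
```

Because the token is only ever forwarded as a request header, revoking it at the provider immediately cuts off access.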
Step 2: Paste your PAT into the input card
Expand the Private Repo section and paste your token. It is used once per generation and is discarded immediately after.
Step 3: Add your DockerForge API Key
Private repo generation requires a DockerForge API key. Expand the API Key section and paste it in. See the next section for how to get one.
API Key
A DockerForge API key is required to generate Dockerfiles for private repositories. Public repos are always free and do not need a key.
Open the account panel and sign in with Google via OAuth. No password is needed.
Your key is generated automatically on first sign-in and shown in the panel. It starts with `dkf_`. Copy it and store it somewhere you can find it later.
Back on the main page, expand the API Key section in the input card, paste your key, then click Generate as normal.
Output Explained
DockerForge produces several files, each shown on its own tab. Here is what each one contains and how to use it.
Dockerfile
A multi-stage Dockerfile with named stages (`deps`, `build`, `runtime`). You can target individual stages using `docker build --target`. The final image contains only what is needed to run the app.
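As a rough sketch of that shape, assuming a Node.js app (base images, commands, and output paths are placeholders; the real output varies with the detected stack):

```dockerfile
# deps: install dependencies against the lockfile only, so this layer
# caches until package.json or the lockfile changes.
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# build: compile the app using the cached dependency layer.
FROM node:20-alpine AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# runtime: ship only the compiled output and its dependencies.
FROM node:20-alpine AS runtime
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
CMD ["node", "dist/main.js"]
```

With named stages, `docker build --target build -t myapp:build .` stops after the build stage, which is handy for debugging compile errors without assembling the runtime image.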
Dockerignore
A `.dockerignore` file tuned to your stack. It excludes `node_modules`, test directories, `.env` files, and build caches. Place it in the same directory as the Dockerfile.
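For a sense of what this looks like, here is an illustrative sketch for a Node stack (entries vary by project; this is not the exact generated file):

```
node_modules
dist
coverage
__tests__
.env
.env.*
.git
*.log
```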
Deploy
Shell commands for building and running your image locally. Use the copy action to get the raw command without the leading `$` prompt.
Explanation
A plain-English breakdown of every decision: why that base image, what each stage does, and how the layer order is structured for Docker cache reuse.
For example, a static frontend is compiled in a `node:alpine` build stage, then the output is copied into an `nginx:1.27-alpine` image. The final image is much smaller than shipping a Node runtime just to serve static files.
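That static-site pattern looks roughly like this (a sketch; the build command and output directory are assumptions for a typical frontend project):

```dockerfile
# build: compile the static assets with the full Node toolchain.
FROM node:alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# runtime: only the compiled assets ship; no Node runtime in the final image.
FROM nginx:1.27-alpine AS runtime
COPY --from=build /app/dist /usr/share/nginx/html
```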
Monorepos
DockerForge detects multi-service repositories and generates a separate, correctly scoped Dockerfile for each service found.
How service detection works
A directory is treated as an independent service only when it satisfies all three conditions:
- It contains a manifest file: `package.json`, `requirements.txt`, or a `.csproj`
- A lockfile is reachable from the service directory or the project root
- It is not inside a test directory such as `__tests__`, `fixtures`, or `test`
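As a hypothetical layout (paths invented for illustration), the conditions classify directories like this:

```
monorepo/
├── package.json           # root manifest
├── package-lock.json      # root lockfile, reachable by all members
├── packages/
│   ├── api/
│   │   └── package.json   # manifest + reachable lockfile → service
│   └── web/
│       └── package.json   # manifest + reachable lockfile → service
└── __tests__/
    └── fixtures/
        └── package.json   # inside a test directory → ignored
```

Here `packages/api` and `packages/web` each get their own Dockerfile, while the fixture manifest is skipped.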
Yarn and pnpm workspaces
Workspace monorepos are fully supported. DockerForge copies lockfiles and package.json files for every workspace member before running install, so hoisted dev dependencies resolve correctly at build time.
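A sketch of that copy order for a hypothetical pnpm workspace (package names and paths invented; the actual output depends on your repo):

```dockerfile
FROM node:20-alpine AS deps
WORKDIR /repo
# Lockfile and every workspace member's package.json are copied before the
# install, so hoisted dependencies resolve and the layer caches until a
# manifest changes.
COPY pnpm-lock.yaml pnpm-workspace.yaml package.json ./
COPY packages/api/package.json packages/api/
COPY packages/web/package.json packages/web/
RUN corepack enable && pnpm install --frozen-lockfile
```

Copying source files only after this stage keeps the expensive install layer cached across most code changes.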
Targeting a single service
To generate a Dockerfile for one specific service inside a monorepo, add the subfolder path to the URL:
```
# whole monorepo: generates one Dockerfile per service
https://github.com/org/monorepo

# just the API service
https://github.com/org/monorepo/tree/main/packages/api
```
Frequently Asked Questions
Will my secrets end up in the image?
No. Generated `.dockerignore` files exclude `.env` files and other secrets. DockerForge always uses explicit COPY statements scoped to the files that are actually needed, so nothing sensitive ends up in an image layer.

How do I target a specific branch?
Append the branch to the URL, e.g. `https://github.com/org/repo/tree/my-branch`. When no branch is specified, DockerForge uses the repository's default branch.

Why doesn't the generated Dockerfile just run `nest start`?
`nest start` is a development command. It compiles on demand and carries the full NestJS CLI as a dependency, which has no place in a production image. DockerForge compiles your app to `dist/` during the build stage and runs the compiled output directly at runtime.