01 / Overview

Getting Started

DockerForge generates production-ready Dockerfiles from any Git repository. Paste a URL, click Generate, and you get a complete, working Dockerfile in seconds. No config files required.

1
Paste your repository URL

Enter a GitHub, GitLab, or Bitbucket URL into the input field. Supports specific branches and subfolders too.

2
Click Generate

DockerForge fetches your repo structure, detects the stack, and builds a multi-stage Dockerfile optimised for production.

3
Copy and ship

Copy the Dockerfile, .dockerignore, and deploy commands. Place them in your project root and run the build.

No account needed for public repos. Just paste a URL and generate. An API key is only required when working with private repositories.
02 / Inputs

Input Methods

There are three ways to provide your project to DockerForge. Use whichever fits your workflow.

🔗 Git URL
Paste a GitHub, GitLab, or Bitbucket URL. Works for public repos with no setup required.
📦 Upload ZIP
Zip your project folder and upload it directly. Useful when the project is not hosted on a remote provider.
📋 File Tree
Paste a directory listing from find or tree. DockerForge infers the project structure from the output.
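A listing suitable for the File Tree input can be produced with a single find command. This is an illustration, not a DockerForge requirement — any plain-text directory listing works; the pruned directory names are assumptions about what carries no structural signal:

```shell
# List project files for the File Tree input, pruning dependency and
# VCS directories at any depth so only the project structure remains.
find . \( -name node_modules -o -name .git \) -prune -o -type f -print
```

Run it from your project root and paste the output directly into the File Tree field.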

Supported URL formats

All of the following resolve correctly:

Examples
https://github.com/org/repo
https://github.com/org/repo/tree/main/packages/api
https://gitlab.com/org/repo
https://bitbucket.org/org/repo
SSH URLs are not supported. Convert git@github.com:org/repo.git to the HTTPS equivalent before pasting.
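The conversion can be scripted if you have many remotes to translate. This sed one-liner is an illustration (not part of DockerForge) and assumes the common git@host:org/repo.git form:

```shell
# Rewrite git@host:org/repo.git into https://host/org/repo
ssh_url="git@github.com:org/repo.git"
https_url=$(printf '%s\n' "$ssh_url" | sed -E 's#^git@([^:]+):#https://\1/#; s#\.git$##')
echo "$https_url"   # → https://github.com/org/repo
```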

Hints (optional)

Expand the Hints panel in the input card to provide extra context such as your Node version, exposed ports, or environment variable names. Hints override auto-detection when you know your setup better than the analyser does.

03 / Providers

Supported Repositories

DockerForge reads repositories directly via provider APIs. No git binary is needed and nothing is cloned locally.

Provider    Public repos   Private repos           Subfolder support
GitHub      Free           PAT required            via /tree/branch/path
GitLab      Free           PAT required            via /-/tree/branch/path
Bitbucket   Free           App password required   via /src/branch/path

Detected stacks

DockerForge automatically identifies the framework and package manager from your repo. Supported stacks include:

Node.js Python NestJS Next.js Vite React (CRA) .NET Yarn Workspaces pnpm
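Detection keys off marker files in the repository. The sketch below is a rough approximation of the idea only — the real analyser's rules are more involved, and the file-to-stack mapping here is illustrative:

```shell
# Guess the stack of a directory from its marker files, most specific first.
detect_stack() {
  dir="$1"
  if   [ -f "$dir/nest-cli.json" ];    then echo "NestJS"
  elif [ -f "$dir/next.config.js" ];   then echo "Next.js"
  elif [ -f "$dir/vite.config.js" ] || [ -f "$dir/vite.config.ts" ]; then echo "Vite"
  elif [ -f "$dir/pnpm-lock.yaml" ];   then echo "pnpm"
  elif [ -f "$dir/package.json" ];     then echo "Node.js"
  elif [ -f "$dir/requirements.txt" ]; then echo "Python"
  else echo "unknown"
  fi
}
```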
04 / Authentication

Private Repositories

To generate a Dockerfile for a private repo you need two things: a Personal Access Token (PAT) from your Git provider, and a DockerForge API key.

Step 1: Create a Personal Access Token

Your PAT gives DockerForge read access to your repository. It is passed directly to the provider API and is never stored on our servers.

  • GitHub: Settings > Developer settings > Personal access tokens > Fine-grained. Grant read-only access to the target repo (contents: read).
  • GitLab: User Settings > Access Tokens. Set the scope to read_repository.
  • Bitbucket: Personal settings > App passwords. Enable the Repositories: read permission.

Step 2: Paste your PAT into the input card

Expand the Private Repo section and paste your token. It is used once per generation and is discarded immediately after.

Step 3: Add your DockerForge API Key

Private repo generation requires a DockerForge API key. Expand the API Key section and paste it in. See the next section for how to get one.

Use the minimum scope. Create your token with read-only repository access. Write, admin, and webhook permissions are not needed and should not be granted.
05 / Access

API Key

A DockerForge API key is required to generate Dockerfiles for private repositories. Public repos are always free and do not need a key.

1
Click "Get API Key" in the top nav

This opens the account panel. Sign in with Google via OAuth. No password is needed.

2
Copy your key from the panel

Your key is generated automatically on first sign-in and shown in the panel. It starts with dkf_. Copy it and store it somewhere you can find it later.

3
Paste it when generating from a private repo

Back on the main page, expand the API Key section in the input card, paste your key, then click Generate as normal.

Rate limits: Public repo generation allows up to 10 Dockerfiles per hour per IP address. Authenticated requests with an API key have a higher quota tied to your account tier.
06 / Output

Output Explained

DockerForge produces several files, each shown on its own tab. Here is what each one contains and how to use it.

Dockerfile

# stage 1: install production dependencies
FROM node:20-alpine AS deps
WORKDIR /build
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# stage 2: compile source
FROM node:20-alpine AS build
WORKDIR /build
COPY --from=deps /build/node_modules ./node_modules
COPY src/ ./src
COPY tsconfig.json ./
RUN npm run build

# stage 3: minimal runtime image
FROM node:20-alpine AS runtime
WORKDIR /app
COPY --from=build /build/dist ./dist
COPY --from=deps /build/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/main.js"]

Dockerignore

# dependencies and build output
node_modules/
dist/

# environment and secrets
.env
.env.*
*.pem

# test directories
__tests__/
coverage/
*.test.js
*.spec.js

# version control and editor
.git/
.gitignore
.DS_Store
*.log

Deploy

# build the image
$ docker build -t my-app .

# run locally on port 3000
$ docker run --rm -p 3000:3000 my-app

# run with an env file
$ docker run --rm -p 3000:3000 --env-file .env my-app

Dockerfile

A multi-stage Dockerfile with named stages (deps, build, runtime). You can target individual stages using docker build --target. The final image contains only what is needed to run the app.

Dockerignore

A .dockerignore file tuned to your stack. It excludes node_modules, test directories, .env files, and build caches. Place it in the same directory as the Dockerfile.

Deploy

Shell commands for building and running your image locally. Use the copy action to get the raw command without the leading $.
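If you copy the whole tab as text instead of using the copy action, the prompt markers can be stripped in one pass. A small sed illustration (not a DockerForge feature):

```shell
# Remove the "$ " prompt prefix from copied deploy commands,
# leaving comment lines and blank lines untouched.
strip_prompt() { sed 's#^\$ ##'; }
printf '%s\n' '$ docker build -t my-app .' | strip_prompt
# → docker build -t my-app .
```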

Explanation

A plain-English breakdown of every decision: why that base image, what each stage does, and how the layer order is structured for Docker cache reuse.

Frontend projects use nginx as the serve stage. Vite, CRA, and static frontend repos compile in a node:alpine build stage, then the output is copied into an nginx:1.27-alpine image. The final image is much smaller than shipping a Node runtime just to serve static files.
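The pattern described above looks roughly like this. It is a sketch assuming a Vite app whose build writes to dist/ and whose sources live in src/ alongside index.html and vite.config.ts — your file names, build command, and output folder may differ:

```dockerfile
# build stage: compile the static assets
FROM node:20-alpine AS build
WORKDIR /build
COPY package.json package-lock.json ./
RUN npm ci
COPY src/ ./src
COPY index.html vite.config.ts ./
RUN npm run build

# serve stage: nginx ships only the compiled output
FROM nginx:1.27-alpine AS serve
COPY --from=build /build/dist /usr/share/nginx/html
EXPOSE 80
```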
07 / Monorepos

Monorepos

DockerForge detects multi-service repositories and generates a separate, correctly scoped Dockerfile for each service found.

How service detection works

A directory is treated as an independent service only when it satisfies all three conditions:

  • It contains a manifest file: package.json, requirements.txt, or a .csproj
  • A lockfile is reachable from the service directory or the project root
  • It is not inside a test directory such as __tests__, fixtures, or test
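The first and third conditions can be approximated with a single find invocation — an illustration of the rule, not DockerForge's actual implementation:

```shell
# List candidate service directories: those holding a manifest file,
# excluding anything under a test directory.
find . \( -name __tests__ -o -name fixtures -o -name test \) -prune -o \
  \( -name package.json -o -name requirements.txt -o -name '*.csproj' \) \
  -exec dirname {} \; | sort -u
```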

Yarn and pnpm workspaces

Workspace monorepos are fully supported. DockerForge copies lockfiles and package.json files for every workspace member before running install, so hoisted dev dependencies resolve correctly at build time.
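In Dockerfile terms, a workspace-aware install stage copies every member's package.json before the install runs, so the dependency layer stays cache-friendly. The sketch below assumes a pnpm workspace with a packages/* layout; the member paths are illustrative:

```dockerfile
FROM node:20-alpine AS deps
WORKDIR /build
# copy the workspace config and every member manifest first,
# so changing source code does not invalidate the install layer
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY packages/api/package.json ./packages/api/
COPY packages/web/package.json ./packages/web/
RUN corepack enable && pnpm install --frozen-lockfile
```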

Targeting a single service

To generate a Dockerfile for one specific service inside a monorepo, add the subfolder path to the URL:

Subfolder targeting
# whole monorepo: generates one Dockerfile per service
https://github.com/org/monorepo

# just the API service
https://github.com/org/monorepo/tree/main/packages/api
The root package.json is skipped when subdirectory services are detected. It is treated as a workspace config rather than a runnable service.
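How a /tree/ URL decomposes into branch and subfolder can be shown with shell parameter expansion. This is illustrative parsing, not DockerForge's internal logic, and it assumes the branch name contains no slash:

```shell
url="https://github.com/org/monorepo/tree/main/packages/api"
rest="${url#*/tree/}"    # main/packages/api
branch="${rest%%/*}"     # main
subdir="${rest#*/}"      # packages/api
echo "branch=$branch subdir=$subdir"
# → branch=main subdir=packages/api
```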
08 / FAQ

Frequently Asked Questions

Does DockerForge store my PAT or source code?
No. Your Personal Access Token is used once to fetch the repo structure and is never written to disk or logged. Source files are processed in a temporary workspace and deleted after the Dockerfile is generated.
Why does the generated Dockerfile never use COPY . .?
A wildcard copy sends everything in the build context into the image, including .env files and other secrets. DockerForge always uses explicit COPY statements scoped to the files that are actually needed, so nothing sensitive ends up in an image layer.
My repo has an unusual structure. Will it still work?
Usually yes. Use the Hints panel to supply extra context such as your runtime version, ports, or a custom start command. Hints take priority over auto-detection.
How do I target a specific branch?
Include the branch in the URL: https://github.com/org/repo/tree/my-branch. When no branch is specified, DockerForge uses the repository's default branch.
Why does my NestJS app use node dist/main.js instead of nest start?
nest start is a development command. It compiles on demand and carries the full NestJS CLI as a dependency, which has no place in a production image. DockerForge compiles your app to dist/ during the build stage and runs the compiled output directly at runtime.
Can I use this for local projects that are not on GitHub?
Yes. Use the Upload ZIP or File Tree input methods. ZIP upload accepts a standard archive of your project. File Tree accepts a directory listing pasted as plain text.
I hit a rate limit. What now?
The free tier allows 10 generations per hour per IP address. The limit resets at the top of the next hour. For a higher quota, sign in and use an API key.