Compare commits

20 Commits

Author SHA1 Message Date
86fe7a8bf1 feat: Add multilingual deployment progress messages
Some checks failed
Build and Push Docker Image (Production) / build-and-push-main (push) Successful in 2m32s
Build and Push Docker Image (Dev) / build-and-push-dev (push) Has been cancelled
Build and Push Docker Image (Staging) / build-and-push-staging (push) Successful in 4m17s
- Created backend i18n system with EN/NL/AR translations
- Frontend now sends language preference with deployment request
- Backend deployment messages follow user's selected language
- Translated key messages: initializing, creating app, SSL waiting, etc.
- Added top margin (100px) on mobile to prevent language button overlap

Fixes real-time deployment status showing English regardless of language selection.
2026-01-13 16:40:05 +01:00
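The language-aware deployment messages this commit describes can be sketched as a lookup table with an English fallback. This is an illustrative reconstruction, not the repository's actual code: the message keys, the table layout, and the `deployMessage` helper are all assumed names.

```typescript
// Hypothetical sketch of backend i18n for deployment progress messages.
// Keys and strings are illustrative; the real message set lives in the repo.
const deployMessages: Record<string, Record<string, string>> = {
  en: { initializing: 'Initializing deployment...', creatingApp: 'Creating application...' },
  nl: { initializing: 'Implementatie initialiseren...', creatingApp: 'Applicatie aanmaken...' },
  ar: { initializing: 'جارٍ تهيئة النشر...', creatingApp: 'جارٍ إنشاء التطبيق...' },
};

// Resolve a progress message in the user's language, falling back to
// English for unknown languages, and to the key itself for unknown keys.
function deployMessage(lang: string, key: string): string {
  const table = deployMessages[lang] ?? deployMessages.en;
  return table[key] ?? deployMessages.en[key] ?? key;
}
```

With this shape, the frontend only has to send `lang` alongside the deploy request (as the commit's diff to `DeployPage.tsx` does) and the backend picks the matching table per message.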
dd41bb5a6a Margin top mobile
All checks were successful
Build and Push Docker Image (Production) / build-and-push-main (push) Successful in 2m39s
2026-01-13 16:33:04 +01:00
8c8536f668 fix: Use standard Docker Compose variable syntax
All checks were successful
Build and Push Docker Image (Production) / build-and-push-main (push) Successful in 2m57s
Changed from ${{project.VAR}} to ${VAR} syntax.

The ${{project.VAR}} syntax is for Dokploy's Environment editor UI,
not for docker-compose.yml files. Docker Compose requires standard
${VAR} syntax to read from .env file.

The .env file (managed by Dokploy) already contains the actual values.
2026-01-13 16:19:44 +01:00
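The distinction this commit draws can be shown in a minimal compose fragment (variable names illustrative, not from the repo):

```yaml
# docker-compose.yml: standard Docker Compose interpolation.
# ${VAR} is resolved by Docker Compose from the shell environment or the
# .env file next to the compose file (here, the .env managed by Dokploy).
services:
  app:
    environment:
      - DOKPLOY_URL=${DOKPLOY_URL}   # resolved by Compose from .env
      # ${{project.VAR}} is Dokploy Environment-editor syntax and is NOT
      # understood by Docker Compose inside this file.
```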
db3a86404a fix: Use single dollar for Dokploy variable substitution
All checks were successful
Build and Push Docker Image (Production) / build-and-push-main (push) Successful in 2m13s
Official Dokploy docs specify ${{project.VAR}} (single $), not $${{project.VAR}}.
Double $$ is Docker Compose escape syntax preventing Dokploy substitution.

Caused missing SHARED_PROJECT_ID/SHARED_ENVIRONMENT_ID in portal container.

Ref: https://docs.dokploy.com/docs/core/variables
2026-01-13 16:17:13 +01:00
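The escaping layers being debated across this and the neighboring commits can be summarized in one sketch (values illustrative). Note the repo ultimately settled on plain `${VAR}` in compose files (commit 8c8536f6), with `${{project.VAR}}` reserved for Dokploy's Environment editor:

```yaml
environment:
  - A=${VAR}            # expanded by Docker Compose from the .env file
  - B=$${{project.VAR}} # Compose emits a literal ${{project.VAR}} ($$ escapes $)
  - C=${{project.VAR}}  # Compose tries to interpolate this itself; error-prone
```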
d900d905de docs: add critical Docker Compose custom command configuration
- Document requirement for custom compose command with -f flag
- Add troubleshooting section for 'no configuration file provided' error
- Include examples for dev/staging/prod environments
- Explain why Dokploy needs explicit -f flag for non-default filenames

Resolves issue where Dokploy couldn't find docker-compose.prod.yml
2026-01-13 15:52:17 +01:00
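The custom command requirement documented here comes down to passing `-f` explicitly, since Docker Compose only auto-discovers `docker-compose.yml`/`compose.yaml`. Illustrative commands (not quoted from the repo docs):

```shell
# Without -f, Compose fails with "no configuration file provided"
# for non-default filenames. Point it at the environment-specific file:
docker compose -f docker-compose.dev.yml up -d
docker compose -f docker-compose.staging.yml up -d
docker compose -f docker-compose.prod.yml up -d
```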
3d07301992 trigger redeploy
2026-01-13 15:20:38 +01:00
f5be8d856d Merge staging into main - resolve conflicts
All checks were successful
Build and Push Docker Image (Production) / build-and-push-main (push) Successful in 3m34s
2026-01-13 15:03:06 +01:00
3bda68282e Merge dev into staging - resolve docker-compose.local.yml conflict
All checks were successful
Build and Push Docker Image (Staging) / build-and-push-staging (push) Successful in 2m16s
2026-01-13 15:01:43 +01:00
968dc74555 docs: add testing session 2026-01-13 findings
- Verified workflow separation and dollar sign escaping
- Retrieved shared project and environment IDs
- Tested local dev server health endpoint
- Documented Dokploy API token blocker (returns Forbidden)
- Added commands for resolving token issue
- Updated environment configuration requirements
2026-01-13 14:07:20 +01:00
eb2745dd5a workflows changes
All checks were successful
Build and Push Docker Image (Dev) / build-and-push-dev (push) Successful in 2m8s
2026-01-13 13:48:42 +01:00
1ff69f9328 fix: re-apply dollar sign escape in docker-compose.dev.yml
All checks were successful
Build and Push Docker Image / build-and-push (push) Successful in 2m12s
Problem: Commit c2c188f (docker ports removed) accidentally reverted the
dollar sign escape fix from commit dd063d5.

Evidence:
- git show dd063d5:docker-compose.dev.yml shows: $${{project.SHARED_PROJECT_ID}} 
- Current docker-compose.dev.yml has: ${{project.SHARED_PROJECT_ID}} 
- Dokploy error log shows: 'You may need to escape any $ with another $'
- staging.yml and prod.yml still have correct $$ (lines 16-17)

Root Cause:
Manual edit in c2c188f modified docker-compose files and accidentally
removed one dollar sign during the 'docker ports removed' change.

Solution:
Re-applied dollar sign escape: $ → $$ on lines 14-15

Verification:
- grep "SHARED_PROJECT_ID" docker-compose.*.yml shows all have $${{
- docker-compose.dev.yml now matches staging.yml and prod.yml

This will fix the Dokploy deployment error.
2026-01-13 13:40:54 +01:00
ef24af3302 Docker ports removed
2026-01-13 13:34:13 +01:00
7dff5454a0 Docker ports removed
2026-01-13 13:33:35 +01:00
c2c188f09f docker ports removed
All checks were successful
Build and Push Docker Image / build-and-push (push) Successful in 2m14s
2026-01-13 13:32:46 +01:00
254b7710d7 fix: add docker-compose files to workflow trigger paths
All checks were successful
Build and Push Docker Image / build-and-push (push) Successful in 2m23s
Problem: Commit dd063d5 modified docker-compose*.yml files but did NOT
trigger Gitea Actions build because docker-compose files were not in the
workflow's paths trigger list.

Evidence:
- git show --stat dd063d5 shows only docker-compose*.yml and docs/ changed
- .gitea/workflows/docker-publish.yaml paths did not include docker-compose*.yml
- Gitea Actions did not run after push (verified by user)

Solution:
Added 'docker-compose*.yml' to workflow paths trigger list.

Justification:
Docker-compose files are deployment configuration that should trigger
image rebuilds when changed. This ensures Dokploy applications always
pull images with the latest docker-compose configurations.

Testing:
This commit will trigger a build because it modifies .gitea/workflows/**
(which is in the paths list). Future docker-compose changes will also trigger.
2026-01-13 13:24:36 +01:00
dd063d5ac5 fix: escape dollar signs in Dokploy project-level variables
Docker Compose interprets $ as variable substitution, so we need to escape
Dokploy's project-level variable syntax by doubling the dollar sign.

Changes:
- docker-compose.*.yml: ${{project.VAR}} → $${{project.VAR}}
- Updated DOKPLOY_DEPLOYMENT.md with correct syntax and explanation
- Updated SHARED_PROJECT_DEPLOYMENT.md with correct syntax and explanation

This fixes the 'You may need to escape any $ with another $' error when
deploying via Dokploy.

Evidence: Tested in Dokploy deployment - error resolved with $$ escaping.
2026-01-13 13:12:06 +01:00
9a593b8b7c feat: add shared project deployment with Dokploy project-level variables
All checks were successful
Build and Push Docker Image / build-and-push (push) Successful in 2m5s
- Add SHARED_PROJECT_ID and SHARED_ENVIRONMENT_ID to all docker-compose files
- Use Dokploy's project-level variable syntax: ${{project.VARIABLE}}
- Deploy all user AI stacks to a single shared Dokploy project
- Update DOKPLOY_DEPLOYMENT.md with shared project configuration guide
- Add comprehensive SHARED_PROJECT_DEPLOYMENT.md architecture documentation

Benefits:
- Centralized management (all stacks in one project)
- Resource efficiency (no per-user project overhead)
- Simplified configuration (project-level shared vars)
- Better organization (500 apps in 1 project vs 500 projects)

How it works:
1. Portal reads SHARED_PROJECT_ID from environment
2. Docker-compose uses ${{project.SHARED_PROJECT_ID}} to reference project-level vars
3. Dokploy resolves these at runtime
4. Portal deploys user stacks as applications within the shared project

Fallback: If variables not set, falls back to legacy behavior (separate project per user)
2026-01-13 12:01:59 +01:00
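The fallback rule described above ("if variables not set, falls back to legacy behavior") can be sketched as a small resolver. The names `resolveTarget` and `DeployTarget` are assumptions for illustration, not the portal's actual API:

```typescript
// Sketch: choose the deployment target based on environment variables.
// Both shared IDs must be present to use the shared project; otherwise
// fall back to the legacy one-project-per-user behavior.
interface DeployTarget {
  mode: 'shared' | 'per-user';
  projectId?: string;
  environmentId?: string;
}

function resolveTarget(env: Record<string, string | undefined>): DeployTarget {
  const projectId = env.SHARED_PROJECT_ID;
  const environmentId = env.SHARED_ENVIRONMENT_ID;
  if (projectId && environmentId) {
    return { mode: 'shared', projectId, environmentId };
  }
  return { mode: 'per-user' }; // legacy: separate Dokploy project per user
}
```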
10ed0e46d8 feat: add multi-environment deployment with Gitea Actions & Dokploy
- Update Gitea workflow to build dev/staging/main branches
- Create environment-specific docker-compose files
  - docker-compose.dev.yml (pulls dev image)
  - docker-compose.staging.yml (pulls staging image)
  - docker-compose.prod.yml (pulls latest image)
  - docker-compose.local.yml (builds locally for development)
- Remove generic docker-compose.yml (replaced by env-specific files)
- Update .dockerignore to exclude docs/ and .gitea/ from production images
- Add comprehensive deployment guide (docs/DOKPLOY_DEPLOYMENT.md)

Image Tags:
- dev branch → :dev
- staging branch → :staging
- main branch → :latest
- All branches → :{branch}-{sha}

Benefits:
- Separate deployments for dev/staging/prod
- Automated CI/CD via Gitea Actions + Dokploy webhooks
- Leaner production images (excludes dev tools/docs)
- Local development support (docker-compose.local.yml)
- Rollback support via SHA-tagged images
2026-01-13 11:51:48 +01:00
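The rollback support via SHA-tagged images mentioned above works because every build also gets a `{branch}-{sha}` tag. A hedged sketch (the SHA shown is hypothetical):

```shell
# Roll back by re-tagging a previous SHA-suffixed build as :latest.
# "main-abc1234" is a placeholder; use a real tag from the registry.
docker pull git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:main-abc1234
docker tag  git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:main-abc1234 \
            git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:latest
```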
55378f74e0 fix: Docker build AVX issue with Node.js/Bun hybrid strategy
- Switch build stage from Bun to Node.js to avoid AVX CPU requirement
- Use Node.js 20 Alpine for building React client (Vite)
- Keep Bun runtime for API server (no AVX needed for runtime)
- Update README.md with build strategy and troubleshooting
- Update CLAUDE.md with Docker architecture documentation
- Add comprehensive docs/DOCKER_BUILD_FIX.md with technical details

Fixes #14 - Docker build crashes with "CPU lacks AVX support"

Tested:
- Docker build: SUCCESS
- Container runtime: SUCCESS
- Health check: PASS
- React client serving: PASS
2026-01-13 11:42:15 +01:00
2885990ac6 fixed bun AVX
All checks were successful
Build and Push Docker Image / build-and-push (push) Successful in 5m22s
2026-01-13 11:33:27 +01:00
20 changed files with 1565 additions and 46 deletions


@@ -13,6 +13,7 @@ node_modules
 # Documentation
 *.md
 !README.md
+docs
 
 # IDE
 .vscode
@@ -49,6 +50,7 @@ docker-compose*.yml
 # CI/CD
 .github
 .gitlab-ci.yml
+.gitea
 
 # Scripts
 scripts


@@ -0,0 +1,57 @@
name: Build and Push Docker Image (Dev)

on:
  push:
    branches:
      - dev
    paths:
      - 'src/**'
      - 'client/**'
      - 'Dockerfile'
      - 'docker-compose.dev.yml'
      - 'package.json'
      - '.gitea/workflows/docker-publish-dev.yaml'
  workflow_dispatch:

env:
  REGISTRY: git.app.flexinit.nl
  IMAGE_NAME: oussamadouhou/ai-stack-deployer

jobs:
  build-and-push-dev:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Gitea Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: oussamadouhou
          password: ${{ secrets.REGISTRY_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=raw,value=dev
            type=sha,prefix=dev-
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}


@@ -1,4 +1,4 @@
-name: Build and Push Docker Image
+name: Build and Push Docker Image (Production)
 on:
   push:
@@ -6,8 +6,11 @@ on:
       - main
     paths:
       - 'src/**'
+      - 'client/**'
       - 'Dockerfile'
-      - '.gitea/workflows/**'
+      - 'docker-compose.prod.yml'
+      - 'package.json'
+      - '.gitea/workflows/docker-publish-main.yaml'
   workflow_dispatch:
 env:
@@ -15,7 +18,7 @@ env:
   IMAGE_NAME: oussamadouhou/ai-stack-deployer
 jobs:
-  build-and-push:
+  build-and-push-main:
     runs-on: ubuntu-latest
     permissions:
      contents: read
@@ -41,8 +44,8 @@ jobs:
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
-           type=raw,value=latest,enable={{is_default_branch}}
+           type=raw,value=latest
-           type=sha,prefix=
+           type=sha,prefix=main-
      - name: Build and push Docker image
        uses: docker/build-push-action@v5


@@ -0,0 +1,57 @@
name: Build and Push Docker Image (Staging)

on:
  push:
    branches:
      - staging
    paths:
      - 'src/**'
      - 'client/**'
      - 'Dockerfile'
      - 'docker-compose.staging.yml'
      - 'package.json'
      - '.gitea/workflows/docker-publish-staging.yaml'
  workflow_dispatch:

env:
  REGISTRY: git.app.flexinit.nl
  IMAGE_NAME: oussamadouhou/ai-stack-deployer

jobs:
  build-and-push-staging:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Gitea Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: oussamadouhou
          password: ${{ secrets.REGISTRY_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=raw,value=staging
            type=sha,prefix=staging-
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}


@@ -316,6 +316,13 @@ Missing (needs implementation):
 ### Docker Build and Run
+
+**Build Architecture**: The Dockerfile uses a hybrid approach to avoid AVX CPU requirements:
+- **Build stage** (Node.js 20): Builds React client with Vite (no AVX required)
+- **Runtime stage** (Bun 1.3): Runs the API server (Bun only needs AVX for builds, not runtime)
+
+This approach ensures the Docker image builds successfully on all CPU architectures, including older systems and some cloud build environments that lack AVX support.
+
 ```bash
 # Build the Docker image
 docker build -t ai-stack-deployer:latest .
@@ -331,6 +338,8 @@ docker run -d \
   ai-stack-deployer:latest
 ```
+
+**Note**: If you encounter "CPU lacks AVX support" errors during Docker builds, ensure you're using the latest Dockerfile which implements the Node.js/Bun hybrid build strategy.
 ### Deploying to Dokploy
 1. **Prepare Environment**:


@@ -1,22 +1,25 @@
-# Use official Bun image
 # ***NEVER FORGET THE PRINCIPLES RULES***
-FROM oven/bun:1.3-alpine AS base
-# Set working directory
+# Build stage - Use Node.js to avoid AVX CPU requirement
+FROM node:20-alpine AS builder
 WORKDIR /app
-# Copy package files
 COPY package.json bun.lock* ./
-# Install dependencies
-FROM base AS deps
-RUN bun install --frozen-lockfile --production
-# Build stage
-FROM base AS builder
-RUN bun install --frozen-lockfile
+# Install dependencies using npm (works without AVX)
+RUN npm install
 COPY . .
-RUN bun run build
+# Client: Vite build via Node.js
+# API: Skip bun build, copy src files directly (Bun will run them at runtime)
+RUN npm run build:client
+
+FROM node:20-alpine AS deps
+WORKDIR /app
+COPY package.json bun.lock* ./
+RUN npm install --production
 # Production stage
 FROM oven/bun:1.3-alpine AS runner


@@ -36,13 +36,16 @@ User's AI Stack Container (OpenCode + ttyd)
 ### Technology Stack
-- **Runtime**: Bun 1.3+
+- **Runtime**: Bun 1.3+ (production), Node.js 20 (build)
 - **Framework**: Hono 4.11.3
 - **Language**: TypeScript
-- **Container**: Docker with multi-stage builds
+- **Frontend**: React 19 + Vite + Tailwind CSS 4
+- **Container**: Docker with multi-stage builds (Node.js build, Bun runtime)
 - **Orchestration**: Dokploy
 - **Reverse Proxy**: Traefik with wildcard SSL
+
+**Build Strategy**: Uses Node.js for building (avoids AVX CPU requirement) and Bun for runtime (performance).
 ## Quick Start
 ### Prerequisites
@@ -344,6 +347,24 @@ If a deployment fails but the name is marked as taken:
 2. Delete the partial deployment if present
 3. Try deployment again
+
+### Docker Build Fails with "CPU lacks AVX support"
+
+**Error**: `panic(main thread): Illegal instruction at address 0x...`
+
+**Cause**: Bun requires AVX CPU instructions which may not be available in all Docker build environments.
+
+**Solution**: Already implemented in Dockerfile. The build uses Node.js (no AVX requirement) for building and Bun for runtime:
+
+```dockerfile
+FROM node:20-alpine AS builder
+RUN npm install
+RUN npm run build:client
+
+FROM oven/bun:1.3-alpine AS runner
+```
+
+If you see this error, ensure you're using the latest Dockerfile from the repository.
 ## Security Notes
 - All API tokens stored in environment variables (never in code)


@@ -36,7 +36,7 @@ export const translations = {
     title: 'AI Stack Deployer',
     subtitle: 'Implementeer je persoonlijke AI in seconden',
     chooseStackName: 'Kies Je Stack Naam',
-    availableAt: 'Je zal AI-assistenten beschikbaar zijn op',
+    availableAt: 'Je AI-assistenten zal beschikbaar zijn op',
     stackName: 'Stack Naam',
     placeholder: 'bijv., Oussama',
     inputHint: '3-20 tekens, kleine letters, cijfers en koppeltekens',


@@ -34,7 +34,7 @@ export default function DeployPage() {
       const response = await fetch('/api/deploy', {
         method: 'POST',
         headers: { 'Content-Type': 'application/json' },
-        body: JSON.stringify({ name }),
+        body: JSON.stringify({ name, lang }),
       });
       const data = await response.json();
@@ -118,15 +118,15 @@ export default function DeployPage() {
           dotSize={2}
         />
       </div>
-      <div className="absolute inset-0 bg-[radial-gradient(circle_at_center,_rgba(0,0,0,1)_0%,_transparent_100%)]" />
+      <div className="absolute inset-0 bg-[radial-gradient(circle_at_center,rgba(0,0,0,1)_0%,transparent_100%)]" />
-      <div className="absolute top-0 left-0 right-0 h-1/3 bg-gradient-to-b from-black to-transparent" />
+      <div className="absolute top-0 left-0 right-0 h-1/3 bg-linear-to-b from-black to-transparent" />
       </div>
       <LanguageSelector currentLang={lang} onLangChange={setLang} />
-      <div className="relative z-10 w-full max-w-[640px] p-4 md:p-8">
+      <div className="relative z-10 w-full max-w-w160 p-4 md:p-8">
-        <header className="text-center mb-12">
+        <header className="text-center mb-12 mt-25 md:mt-0">
         <motion.h1
           initial={{ opacity: 0, y: -20 }}
           animate={{ opacity: 1, y: 0 }}
           transition={{ duration: 0.5, delay: 0.1 }}
@@ -134,7 +134,7 @@ export default function DeployPage() {
         >
           {t('title')}
         </motion.h1>
         <motion.p
           initial={{ opacity: 0 }}
           animate={{ opacity: 1 }}
           transition={{ duration: 0.5, delay: 0.2 }}
@@ -207,7 +207,7 @@ export default function DeployPage() {
       </AnimatePresence>
       </main>
       <motion.footer
         initial={{ opacity: 0 }}
         animate={{ opacity: 1 }}
         transition={{ duration: 0.5, delay: 0.5 }}

docker-compose.dev.yml (new file, +36 lines)

@@ -0,0 +1,36 @@
services:
  ai-stack-deployer:
    image: git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:dev
    container_name: ai-stack-deployer-dev
    environment:
      - NODE_ENV=development
      - PORT=3000
      - HOST=0.0.0.0
      - DOKPLOY_URL=${DOKPLOY_URL}
      - DOKPLOY_API_TOKEN=${DOKPLOY_API_TOKEN}
      - STACK_DOMAIN_SUFFIX=${STACK_DOMAIN_SUFFIX:-ai.flexinit.nl}
      - STACK_IMAGE=${STACK_IMAGE:-git.app.flexinit.nl/flexinit/agent-stack:latest}
      - RESERVED_NAMES=${RESERVED_NAMES:-admin,api,www,root,system,test,demo,portal}
      - SHARED_PROJECT_ID=${SHARED_PROJECT_ID}
      - SHARED_ENVIRONMENT_ID=${SHARED_ENVIRONMENT_ID}
    env_file:
      - .env
    restart: unless-stopped
    healthcheck:
      test:
        [
          "CMD",
          "bun",
          "--eval",
          "fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))",
        ]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 5s
    networks:
      - ai-stack-network

networks:
  ai-stack-network:
    driver: bridge


@@ -1,16 +1,13 @@
 version: "3.8"
-# ***NEVER FORGET THE PRINCIPLES***
 services:
   ai-stack-deployer:
     build:
       context: .
       dockerfile: Dockerfile
-    container_name: ai-stack-deployer
+    container_name: ai-stack-deployer-local
-    ports:
-      - "3000:3000"
     environment:
-      - NODE_ENV=production
+      - NODE_ENV=development
       - PORT=3000
       - HOST=0.0.0.0
       - DOKPLOY_URL=${DOKPLOY_URL}
@@ -35,6 +32,9 @@ services:
       start_period: 5s
     networks:
       - ai-stack-network
+    volumes:
+      - ./src:/app/src:ro
+      - ./client:/app/client:ro
 networks:
   ai-stack-network:

docker-compose.prod.yml (new file, +38 lines)

@@ -0,0 +1,38 @@
version: "3.8"

services:
  ai-stack-deployer:
    image: git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:latest
    container_name: ai-stack-deployer
    environment:
      - NODE_ENV=production
      - PORT=3000
      - HOST=0.0.0.0
      - DOKPLOY_URL=${DOKPLOY_URL}
      - DOKPLOY_API_TOKEN=${DOKPLOY_API_TOKEN}
      - STACK_DOMAIN_SUFFIX=${STACK_DOMAIN_SUFFIX:-ai.flexinit.nl}
      - STACK_IMAGE=${STACK_IMAGE:-git.app.flexinit.nl/flexinit/agent-stack:latest}
      - RESERVED_NAMES=${RESERVED_NAMES:-admin,api,www,root,system,test,demo,portal}
      - SHARED_PROJECT_ID=${SHARED_PROJECT_ID}
      - SHARED_ENVIRONMENT_ID=${SHARED_ENVIRONMENT_ID}
    env_file:
      - .env
    restart: unless-stopped
    healthcheck:
      test:
        [
          "CMD",
          "bun",
          "--eval",
          "fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))",
        ]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 5s
    networks:
      - ai-stack-network

networks:
  ai-stack-network:
    driver: bridge


@@ -0,0 +1,38 @@
version: "3.8"

services:
  ai-stack-deployer:
    image: git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:staging
    container_name: ai-stack-deployer-staging
    environment:
      - NODE_ENV=staging
      - PORT=3000
      - HOST=0.0.0.0
      - DOKPLOY_URL=${DOKPLOY_URL}
      - DOKPLOY_API_TOKEN=${DOKPLOY_API_TOKEN}
      - STACK_DOMAIN_SUFFIX=${STACK_DOMAIN_SUFFIX:-ai.flexinit.nl}
      - STACK_IMAGE=${STACK_IMAGE:-git.app.flexinit.nl/flexinit/agent-stack:latest}
      - RESERVED_NAMES=${RESERVED_NAMES:-admin,api,www,root,system,test,demo,portal}
      - SHARED_PROJECT_ID=${SHARED_PROJECT_ID}
      - SHARED_ENVIRONMENT_ID=${SHARED_ENVIRONMENT_ID}
    env_file:
      - .env
    restart: unless-stopped
    healthcheck:
      test:
        [
          "CMD",
          "bun",
          "--eval",
          "fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))",
        ]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 5s
    networks:
      - ai-stack-network

networks:
  ai-stack-network:
    driver: bridge

docs/DOCKER_BUILD_FIX.md (new file, +198 lines)

@@ -0,0 +1,198 @@
# Docker Build AVX Fix
## Problem
Docker build was failing with:
```
CPU lacks AVX support. Please consider upgrading to a newer CPU.
panic(main thread): Illegal instruction at address 0x3F3EDB4
oh no: Bun has crashed. This indicates a bug in Bun, not your code.
error: script "build:client" was terminated by signal SIGILL (Illegal instruction)
```
## Root Cause
Bun requires **AVX (Advanced Vector Extensions)** CPU instructions for its build operations. Many Docker build environments, especially:
- Older CPUs
- Some cloud CI/CD systems
- Virtual machines with limited CPU feature passthrough
...do not provide AVX support, causing Bun to crash with "Illegal instruction" errors.
## Solution
Implemented a **hybrid build strategy** in the Dockerfile:
### Architecture
```dockerfile
# Build stage - Use Node.js to avoid AVX CPU requirement
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json bun.lock* ./
RUN npm install
COPY . .
RUN npm run build:client
# Production dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json bun.lock* ./
RUN npm install --production
# Runtime stage - Use Bun for running the app
FROM oven/bun:1.3-alpine AS runner
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/src ./src
COPY --from=builder /app/dist/client ./dist/client
COPY --from=builder /app/package.json ./
CMD ["bun", "run", "start"]
```
### Why This Works
1. **Build Phase (Node.js)**:
- Vite (used for React build) runs on Node.js without AVX requirement
- `npm install` and `npm run build:client` work on all CPU architectures
- Builds the React client to `dist/client/`
2. **Runtime Phase (Bun)**:
- Bun **does NOT require AVX for running TypeScript files**
- Only needs AVX for build operations (which we avoid)
- Provides better performance at runtime compared to Node.js
## Benefits
**Universal Compatibility**: Builds on all CPU architectures
**No Performance Loss**: Bun still used for runtime (faster than Node.js)
**Clean Separation**: Build tools vs. runtime environment
**Production Ready**: Tested and verified working
## Test Results
```bash
# Build successful
docker build -t ai-stack-deployer:test .
Successfully built 1811daf55502
# Container runs correctly
docker run -d --name test -p 3001:3000 ai-stack-deployer:test
Container ID: 7c4acbf49737
# Health check passes
curl http://localhost:3001/health
{
"status": "healthy",
"version": "0.2.0",
"service": "ai-stack-deployer",
"features": {
"productionClient": true,
"retryLogic": true,
"circuitBreaker": true
}
}
# React client serves correctly
curl http://localhost:3001/
<!DOCTYPE html>
<html lang="en">
<head>
<script type="module" crossorigin src="/assets/index-kibXed5Q.js"></script>
...
```
## Implementation Date
**Date**: January 13, 2026
**Branch**: dev (following Git Flow)
**Files Modified**:
- `Dockerfile` - Switched build stage from Bun to Node.js
- `README.md` - Updated Technology Stack and Troubleshooting sections
- `CLAUDE.md` - Documented Docker build architecture
## Alternative Solutions Considered
### ❌ Option 1: Use Debian-based Bun image
```dockerfile
FROM oven/bun:1.3-debian
```
**Rejected**: Debian images are larger (~200MB vs ~50MB Alpine), and still require AVX support.
### ❌ Option 2: Use older Bun version
```dockerfile
FROM oven/bun:1.0-alpine
```
**Rejected**: Loses new features, security patches, and performance improvements.
### ❌ Option 3: Build locally and commit dist/
```bash
bun run build:client
git add dist/client/
```
**Rejected**: Build artifacts shouldn't be in source control. Makes CI/CD harder.
### ✅ Option 4: Hybrid Node.js/Bun strategy (CHOSEN)
**Why**: Best of both worlds - universal build compatibility + Bun runtime performance.
## Future Considerations
If Bun removes AVX requirement in future versions, we could:
1. Simplify Dockerfile back to single Bun stage
2. Keep current approach for maximum compatibility
3. Monitor Bun release notes for AVX-related changes
## References
- Bun Issue #1521: AVX requirement discussion
- Docker Multi-stage builds: https://docs.docker.com/build/building/multi-stage/
- Vite Documentation: https://vitejs.dev/guide/build.html
## Verification Commands
```bash
# Clean build test
docker build --no-cache -t ai-stack-deployer:test .
# Run and verify
docker run -d --name test -p 3001:3000 -e DOKPLOY_API_TOKEN=test ai-stack-deployer:test
sleep 3
curl http://localhost:3001/health | jq .
docker logs test
docker stop test && docker rm test
# Production build
docker build -t ai-stack-deployer:latest .
docker-compose up -d
docker-compose logs -f
```
## Troubleshooting
If you still encounter AVX errors:
1. **Verify you're using the latest Dockerfile**:
```bash
git pull origin dev
head -10 Dockerfile
# Should show: FROM node:20-alpine AS builder
```
2. **Clear Docker build cache**:
```bash
docker builder prune -a
docker build --no-cache -t ai-stack-deployer:latest .
```
3. **Check Docker version**:
```bash
docker --version
# Recommended: Docker 20.10+ with BuildKit
```
## Contact
For issues or questions about this fix, refer to:
- `CLAUDE.md` - Development guidelines
- `README.md` - Troubleshooting section
- Docker logs: `docker-compose logs -f`

docs/DOKPLOY_DEPLOYMENT.md (new file, +560 lines)

@@ -0,0 +1,560 @@
# Dokploy Deployment Guide
## Overview
This project uses **Gitea Actions** to build Docker images and **Dokploy** to deploy them. Each branch (dev, staging, main) has its own:
- Docker image tag
- Docker Compose file
- Dokploy application
- Domain
---
## Architecture
```
┌─────────────┐
│ Gitea │
│ (Source) │
└──────┬──────┘
│ push event
┌─────────────┐
│ Gitea │
│ Actions │ Builds Docker images
│ (CI/CD) │ Tags: dev, staging, latest
└──────┬──────┘
┌─────────────┐
│ Gitea │
│ Registry │ git.app.flexinit.nl/oussamadouhou/ai-stack-deployer
└──────┬──────┘
│ webhook (push event)
┌─────────────┐
│ Dokploy │ Pulls & deploys image
│ (Deploy) │ Uses docker-compose.{env}.yml
└─────────────┘
```
---
## Branch Strategy
| Branch | Image Tag | Compose File | Domain (suggested) |
|-----------|-----------|----------------------------|------------------------------|
| `dev` | `dev` | `docker-compose.dev.yml` | portal-dev.ai.flexinit.nl |
| `staging` | `staging` | `docker-compose.staging.yml` | portal-staging.ai.flexinit.nl |
| `main` | `latest` | `docker-compose.prod.yml` | portal.ai.flexinit.nl |
---
## Gitea Actions Workflow
**File**: `.gitea/workflows/docker-publish.yaml`
**Triggers**: Push to `dev`, `staging`, or `main` branches
**Builds**:
```yaml
dev branch → git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:dev
staging branch → git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:staging
main branch → git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:latest
```
**Also creates SHA tags**: `{branch}-{short-sha}`
---
## Docker Compose Files
### `docker-compose.dev.yml`
- Pulls: `git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:dev`
- Environment: `NODE_ENV=development`
- Container name: `ai-stack-deployer-dev`
### `docker-compose.staging.yml`
- Pulls: `git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:staging`
- Environment: `NODE_ENV=staging`
- Container name: `ai-stack-deployer-staging`
### `docker-compose.prod.yml`
- Pulls: `git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:latest`
- Environment: `NODE_ENV=production`
- Container name: `ai-stack-deployer`
### `docker-compose.local.yml`
- **Builds locally** (doesn't pull from registry)
- For local development only
- Includes volume mounts for hot reload
---
## Shared Project Configuration (IMPORTANT)
### What is Shared Project Deployment?
The portal deploys **all user AI stacks as applications within a single shared Dokploy project**, instead of creating a new project for each user. This provides:
- ✅ Better organization (all stacks in one place)
- ✅ Shared environment variables
- ✅ Centralized monitoring
- ✅ Easier management
### How It Works
```
Dokploy Project: ai-stack-portal
├── Environment: deployments
│ ├── Application: john-dev
│ ├── Application: jane-prod
│ └── Application: alice-test
```
### Setting Up the Shared Project
**Step 1: Create the Shared Project in Dokploy**
1. In Dokploy UI, create a new project:
- Name: `ai-stack-portal` (or any name you prefer)
- Description: "Shared project for all user AI stacks"
2. Note the **Project ID** (visible in URL or API response)
- Example: `2y2Glhz5Wy0dBNf6BOR_-`
3. Get the **Environment ID**:
```bash
curl -s "http://10.100.0.20:3000/api/project.one?projectId=2y2Glhz5Wy0dBNf6BOR_-" \
-H "Authorization: Bearer $DOKPLOY_API_TOKEN" | jq -r '.environments[0].id'
```
- Example: `RqE9OFMdLwkzN7pif1xN8`
**Step 2: Configure Project-Level Variables**
In the shared project (`ai-stack-portal`), add these **project-level environment variables**:
| Variable Name | Value | Purpose |
|---------------|-------|---------|
| `SHARED_PROJECT_ID` | `2y2Glhz5Wy0dBNf6BOR_-` | The project where user stacks deploy |
| `SHARED_ENVIRONMENT_ID` | `RqE9OFMdLwkzN7pif1xN8` | The environment within that project |
**Step 3: Reference Variables in Portal Applications**
The portal's docker-compose files read these values using standard Docker Compose variable syntax:
```yaml
environment:
  - SHARED_PROJECT_ID=${SHARED_PROJECT_ID}
  - SHARED_ENVIRONMENT_ID=${SHARED_ENVIRONMENT_ID}
```
Docker Compose resolves `${VAR}` from the application's `.env` file, which Dokploy manages and populates with the actual values.
**Note**: The `${{project.VARIABLE}}` syntax belongs in Dokploy's Environment editor UI (where it resolves project-level variables), not in `docker-compose.yml` files — Docker Compose only understands standard `${VAR}` substitution.
### Important Notes
- ⚠️ **Both variables MUST be set** in the shared project for deployment to work
- ⚠️ If not set, the portal falls back to creating a separate project per user (legacy behavior)
- ✅ You can have different shared projects for dev/staging/prod environments
- ✅ All 3 portal deployments (dev/staging/prod) should point to their respective shared projects
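The shared-vs-legacy decision described above can be sketched as a small helper (illustrative only; the real implementation lives in `src/orchestrator/production-deployer.ts` and these names are hypothetical):

```typescript
// Sketch: pick the deployment target based on the shared-project variables.
interface DeployTarget {
  mode: 'shared' | 'legacy-per-user';
  projectId?: string;
  environmentId?: string;
}

function resolveTarget(env: Record<string, string | undefined>): DeployTarget {
  const projectId = env.SHARED_PROJECT_ID;
  const environmentId = env.SHARED_ENVIRONMENT_ID;
  if (projectId && environmentId) {
    // Both set → deploy the user stack into the shared project.
    return { mode: 'shared', projectId, environmentId };
  }
  // Either missing → legacy behavior: one Dokploy project per user.
  return { mode: 'legacy-per-user' };
}

console.log(resolveTarget({ SHARED_PROJECT_ID: 'p1', SHARED_ENVIRONMENT_ID: 'e1' }).mode); // shared
console.log(resolveTarget({}).mode); // legacy-per-user
```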
---
## Setting Up Dokploy
### Step 1: Create Dev Application
1. **In Dokploy UI**, create new application:
- **Name**: `ai-stack-deployer-dev`
- **Type**: Docker Compose
- **Repository**: `ssh://git@git.app.flexinit.nl:22222/oussamadouhou/ai-stack-deployer.git`
- **Branch**: `dev`
- **Compose File**: `docker-compose.dev.yml`
2. **Configure Domain**:
- Add domain: `portal-dev.ai.flexinit.nl`
- Enable SSL (via Traefik wildcard cert)
3. **Set Environment Variables**:
**Important**: The portal application should be deployed **inside the shared project** (e.g., `ai-stack-portal-dev`).
Then set these **project-level variables** in that shared project:
```env
SHARED_PROJECT_ID=<your-shared-project-id>
SHARED_ENVIRONMENT_ID=<your-shared-environment-id>
```
And these **application-level variables** in the portal app:
```env
DOKPLOY_URL=http://10.100.0.20:3000
DOKPLOY_API_TOKEN=<your-token>
STACK_DOMAIN_SUFFIX=ai.flexinit.nl
STACK_IMAGE=git.app.flexinit.nl/flexinit/agent-stack:latest
```
In the portal application's Environment editor in Dokploy, reference the project-level variables with:
```env
SHARED_PROJECT_ID=${{project.SHARED_PROJECT_ID}}
SHARED_ENVIRONMENT_ID=${{project.SHARED_ENVIRONMENT_ID}}
```
Dokploy writes the resolved values into the application's `.env` file, which the docker-compose file reads via standard `${SHARED_PROJECT_ID}` substitution.
4. **⚠️ CRITICAL: Configure Custom Docker Compose Command**:
Because we use non-default compose file names (`docker-compose.dev.yml`, `docker-compose.prod.yml`, etc.), you **MUST** configure a custom command in Dokploy.
**In Dokploy UI:**
- Go to the application **Settings** or **Advanced** tab
- Find **"Custom Command"** or **"Docker Compose Command"** field
- Set it to:
```bash
compose -p <app-name> -f ./docker-compose.dev.yml up -d --remove-orphans --pull always
```
**Replace `<app-name>`** with your actual application name from Dokploy (e.g., `aistackportal-portal-0rohwx`)
**Replace `docker-compose.dev.yml`** with the appropriate file for each environment:
- Dev: `docker-compose.dev.yml`
- Staging: `docker-compose.staging.yml`
- Production: `docker-compose.prod.yml`
**Why this is required:**
- Dokploy's default command is `docker compose up -d` without the `-f` flag
- Without `-f`, docker looks for `docker-compose.yml` (which doesn't exist)
- This causes the error: `no configuration file provided: not found`
**Full examples:**
```bash
# Dev
compose -p aistackportal-deployer-dev-xyz123 -f ./docker-compose.dev.yml up -d --remove-orphans --pull always
# Staging
compose -p aistackportal-deployer-staging-abc456 -f ./docker-compose.staging.yml up -d --remove-orphans --pull always
# Production
compose -p aistackportal-portal-0rohwx -f ./docker-compose.prod.yml up -d --remove-orphans --pull always
```
5. **Configure Webhook**:
- Event: **Push**
- Branch: `dev`
- This will auto-deploy when you push to dev branch
6. **Deploy**
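The per-environment custom command from step 4 can be expressed as a tiny helper (hypothetical, for illustration only — Dokploy itself takes the command as a plain string):

```typescript
// Hypothetical helper producing the custom compose command for each environment.
function customCommand(appName: string, env: 'dev' | 'staging' | 'prod'): string {
  // Production uses docker-compose.prod.yml; dev/staging follow the same pattern.
  const file = env === 'prod' ? 'docker-compose.prod.yml' : `docker-compose.${env}.yml`;
  return `compose -p ${appName} -f ./${file} up -d --remove-orphans --pull always`;
}

console.log(customCommand('aistackportal-portal-0rohwx', 'prod'));
// compose -p aistackportal-portal-0rohwx -f ./docker-compose.prod.yml up -d --remove-orphans --pull always
```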
### Step 2: Create Staging Application
Repeat Step 1 with these changes:
- **Name**: `ai-stack-deployer-staging`
- **Branch**: `staging`
- **Compose File**: `docker-compose.staging.yml`
- **Domain**: `portal-staging.ai.flexinit.nl`
- **Webhook Branch**: `staging`
### Step 3: Create Production Application
Repeat Step 1 with these changes:
- **Name**: `ai-stack-deployer-prod`
- **Branch**: `main`
- **Compose File**: `docker-compose.prod.yml`
- **Domain**: `portal.ai.flexinit.nl`
- **Webhook Branch**: `main`
---
## Deployment Workflow
### Development Cycle
```bash
# 1. Make changes on dev branch
git checkout dev
# ... make changes ...
git commit -m "feat: add new feature"
git push origin dev
# 2. Gitea Actions automatically builds dev image
# 3. Dokploy webhook triggers and deploys to portal-dev.ai.flexinit.nl
# 4. Test on dev environment
curl https://portal-dev.ai.flexinit.nl/health
# 5. When ready, merge to staging
git checkout staging
git merge dev
git push origin staging
# 6. Gitea Actions builds staging image
# 7. Dokploy deploys to portal-staging.ai.flexinit.nl
# 8. Final testing on staging, then merge to main
git checkout main
git merge staging
git push origin main
# 9. Gitea Actions builds production image (latest)
# 10. Dokploy deploys to portal.ai.flexinit.nl
```
---
## Image Tags Explained
Each push creates multiple tags:
### Example: Push to `dev` branch (commit `abc1234`)
Gitea Actions creates:
```
git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:dev ← Latest dev
git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:dev-abc1234 ← Specific commit
```
### Example: Push to `main` branch (commit `xyz5678`)
Gitea Actions creates:
```
git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:latest ← Latest production
git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:main-xyz5678 ← Specific commit
```
**Why?**
- Branch tags (`dev`, `staging`, `latest`) always point to latest build
- SHA tags allow you to rollback to specific commits if needed
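The tagging scheme above can be mirrored in a short sketch (`imageTags` is a hypothetical helper, not part of the workflow files):

```typescript
// Sketch: derive the image tags Gitea Actions publishes for a push.
function imageTags(branch: string, sha: string): string[] {
  const repo = 'git.app.flexinit.nl/oussamadouhou/ai-stack-deployer';
  // main publishes :latest; other branches publish their own name as the tag.
  const branchTag = branch === 'main' ? 'latest' : branch;
  return [`${repo}:${branchTag}`, `${repo}:${branch}-${sha.slice(0, 7)}`];
}

console.log(imageTags('dev', 'abc1234'));
// ['git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:dev',
//  'git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:dev-abc1234']
```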
---
## Rollback Strategy
### Quick Rollback in Dokploy
If a deployment breaks, you can quickly rollback:
1. **In Dokploy UI**, go to the application
2. **Edit** the docker-compose file
3. Change the image tag to a previous SHA:
```yaml
image: git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:main-abc1234
```
4. **Redeploy**
### Manual Rollback via Git
```bash
# Find the last working commit
git log --oneline
# Revert to that commit
git revert HEAD # or git reset --hard <commit-sha>
# Push to trigger rebuild
git push origin main
```
---
## Local Development
### Using docker-compose.local.yml
```bash
# Build and run locally
docker-compose -f docker-compose.local.yml up -d
# View logs
docker-compose -f docker-compose.local.yml logs -f
# Stop
docker-compose -f docker-compose.local.yml down
```
### Using Bun directly (without Docker)
```bash
# Install dependencies
bun install
# Run dev server (API + Vite)
bun run dev
# Run API only
bun run dev:api
# Run client only
bun run dev:client
```
---
## Environment Variables
### Required in Dokploy
```env
DOKPLOY_URL=http://10.100.0.20:3000
DOKPLOY_API_TOKEN=<your-token>
```
### Optional (with defaults)
```env
PORT=3000
HOST=0.0.0.0
STACK_DOMAIN_SUFFIX=ai.flexinit.nl
STACK_IMAGE=git.app.flexinit.nl/flexinit/agent-stack:latest
RESERVED_NAMES=admin,api,www,root,system,test,demo,portal
```
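For illustration, the defaults above can be applied in TypeScript roughly like this (a sketch; the exact handling lives in the server code, and `parseReserved`/`isReserved` are hypothetical names):

```typescript
// Sketch: parse RESERVED_NAMES with its documented default and check a stack name.
function parseReserved(raw: string | undefined): string[] {
  return (raw ?? 'admin,api,www,root,system,test,demo,portal')
    .split(',')
    .map((n) => n.trim().toLowerCase());
}

function isReserved(stackName: string, reserved: string[]): boolean {
  return reserved.includes(stackName.toLowerCase());
}

const reserved = parseReserved(process.env.RESERVED_NAMES);
console.log(isReserved('Admin', parseReserved(undefined))); // true
console.log(isReserved('john-dev', parseReserved(undefined))); // false
```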
### Per-Environment Overrides
If dev/staging/prod need different configs, set them in Dokploy:
**Dev**:
```env
STACK_DOMAIN_SUFFIX=dev-ai.flexinit.nl
```
**Staging**:
```env
STACK_DOMAIN_SUFFIX=staging-ai.flexinit.nl
```
**Prod**:
```env
STACK_DOMAIN_SUFFIX=ai.flexinit.nl
```
---
## Troubleshooting
### Build Fails in Gitea Actions
Check the workflow logs in Gitea:
```
https://git.app.flexinit.nl/oussamadouhou/ai-stack-deployer/actions
```
Common issues:
- **AVX error**: Fixed in Dockerfile (uses Node.js for build)
- **Registry auth**: Check `REGISTRY_TOKEN` secret in Gitea
### Deployment Fails in Dokploy
1. **Check Dokploy logs**: Application → Logs
2. **Verify image exists**:
```bash
docker pull git.app.flexinit.nl/oussamadouhou/ai-stack-deployer:dev
```
3. **Check environment variables**: Make sure all required vars are set
### Error: "no configuration file provided: not found"
**Symptom:**
```
╔══════════════════════════════════════════════════════════════╗
║ Command: docker compose up -d --force-recreate --pull always ║
╚══════════════════════════════════════════════════════════════╝
no configuration file provided: not found
Error: ❌ Docker command failed
```
**Cause:** Dokploy is looking for the default `docker-compose.yml` file, which doesn't exist. We use environment-specific files (`docker-compose.dev.yml`, `docker-compose.prod.yml`, etc.).
**Solution:** Configure a **custom Docker Compose command** in Dokploy:
1. Go to your application in Dokploy UI
2. Navigate to **Settings** → **Advanced** (or similar section)
3. Find **"Custom Command"** field
4. Set it to:
```bash
compose -p <app-name> -f ./docker-compose.{env}.yml up -d --remove-orphans --pull always
```
Replace:
- `<app-name>` with your actual Dokploy app name (e.g., `aistackportal-portal-0rohwx`)
- `{env}` with `dev`, `staging`, or `prod`
**Example for production:**
```bash
compose -p aistackportal-portal-0rohwx -f ./docker-compose.prod.yml up -d --remove-orphans --pull always
```
5. Save and redeploy
**Why the `-f` flag is needed:** Docker Compose defaults to looking for `docker-compose.yml`. The `-f` flag explicitly specifies which file to use.
### Health Check Failing
```bash
# SSH into Dokploy host
ssh user@10.100.0.20
# Check container logs
docker logs ai-stack-deployer-dev
# Test health endpoint
curl http://localhost:3000/health
```
### Webhook Not Triggering
1. **In Dokploy**, check webhook configuration
2. **In Gitea**, go to repo Settings → Webhooks
3. Verify webhook URL and secret match
4. Check recent deliveries for errors
---
## Production Considerations
### 1. Image Size Optimization
The Docker image excludes dev files via `.dockerignore`:
- ✅ `docs/` - excluded
- ✅ `scripts/` - excluded
- ✅ `.gitea/` - excluded
- ✅ `*.md` (except README.md) - excluded
Current image size: ~150MB
### 2. Security
- Container runs as non-root user (`nodejs:1001`)
- No secrets in source code (uses `.env`)
- Dokploy API accessible only on internal network
### 3. Monitoring
Set up alerts for:
- Container health check failures
- Memory/CPU usage spikes
- Deployment failures
### 4. Backup Strategy
- **Database**: This app has no database (stateless)
- **Configuration**: Environment variables stored in Dokploy (backed up)
- **Code**: Stored in Gitea (backed up)
---
## Summary
| Environment | Domain | Image Tag | Auto-Deploy? |
|-------------|------------------------------|-----------|--------------|
| Dev | portal-dev.ai.flexinit.nl | `dev` | ✅ On push |
| Staging | portal-staging.ai.flexinit.nl | `staging` | ✅ On push |
| Production | portal.ai.flexinit.nl | `latest` | ✅ On push |
**Next Steps**:
1. ✅ Push changes to `dev` branch
2. ⏳ Create 3 Dokploy applications (dev, staging, prod)
3. ⏳ Configure webhooks for each branch
4. ⏳ Deploy and test each environment
---
**Questions?** Check the main README.md or CLAUDE.md for more details.

---
# Shared Project Deployment Architecture
## Overview
The AI Stack Deployer portal deploys **all user AI stacks to a single shared Dokploy project** instead of creating a new project for each user.
---
## Architecture Diagram
```
Dokploy Project: ai-stack-portal  (ID: 2y2Glhz5Wy0dBNf6BOR_-)
├── 📦 Portal Application: ai-stack-deployer-prod
│   ├── Domain: portal.ai.flexinit.nl
│   ├── Image:  git.app.flexinit.nl/.../ai-stack-deployer:latest
│   └── Env:    SHARED_PROJECT_ID=${SHARED_PROJECT_ID}
├── 📦 User Stack: john-dev
│   ├── Domain: john-dev.ai.flexinit.nl
│   ├── Image:  git.app.flexinit.nl/.../agent-stack:latest
│   └── Deployed by: Portal
├── 📦 User Stack: jane-prod
│   ├── Domain: jane-prod.ai.flexinit.nl
│   ├── Image:  git.app.flexinit.nl/.../agent-stack:latest
│   └── Deployed by: Portal
└── 📦 User Stack: alice-test
    ├── Domain: alice-test.ai.flexinit.nl
    ├── Image:  git.app.flexinit.nl/.../agent-stack:latest
    └── Deployed by: Portal
```
---
## How It Works
### Step 1: Portal Reads Configuration
When a user submits a stack name (e.g., "john-dev"), the portal:
1. **Reads environment variables**:
```javascript
const sharedProjectId = process.env.SHARED_PROJECT_ID;
const sharedEnvironmentId = process.env.SHARED_ENVIRONMENT_ID;
```
2. **These values are injected by Dokploy through the application's `.env` file**, which the compose file reads with standard substitution:
```yaml
environment:
  - SHARED_PROJECT_ID=${SHARED_PROJECT_ID}
  - SHARED_ENVIRONMENT_ID=${SHARED_ENVIRONMENT_ID}
```
**Note**: `${{project.VAR}}` references belong in Dokploy's Environment editor UI, not in docker-compose files; Docker Compose only understands `${VAR}` substitution.
### Step 2: Portal Deploys to Shared Project
Instead of creating a new project, the portal:
```javascript
// OLD BEHAVIOR (legacy):
// createProject(`ai-stack-${username}`) ❌ Creates new project per user
// NEW BEHAVIOR (current):
// Uses existing shared project ID ✅
const projectId = sharedProjectId; // From environment variable
const environmentId = sharedEnvironmentId;
// Creates application IN the shared project
createApplication({
projectId: projectId,
environmentId: environmentId,
name: `${username}-stack`,
image: 'git.app.flexinit.nl/.../agent-stack:latest',
domain: `${username}.ai.flexinit.nl`
});
```
### Step 3: User Accesses Their Stack
User visits `https://john-dev.ai.flexinit.nl` → Traefik routes to their application inside the shared project.
---
## Configuration Steps
### 1. Create Shared Project in Dokploy
1. In Dokploy UI, create project:
- **Name**: `ai-stack-portal`
- **Description**: "Shared project for all user AI stacks"
2. Get the **Project ID**:
```bash
# Via API
curl -s "http://10.100.0.20:3000/api/project.all" \
-H "Authorization: Bearer $DOKPLOY_API_TOKEN" | \
jq -r '.[] | select(.name=="ai-stack-portal") | .id'
# Output: 2y2Glhz5Wy0dBNf6BOR_-
```
3. Get the **Environment ID**:
```bash
curl -s "http://10.100.0.20:3000/api/project.one?projectId=2y2Glhz5Wy0dBNf6BOR_-" \
-H "Authorization: Bearer $DOKPLOY_API_TOKEN" | \
jq -r '.environments[0].id'
# Output: RqE9OFMdLwkzN7pif1xN8
```
### 2. Set Project-Level Variables
In the shared project (`ai-stack-portal`), add these **project-level environment variables**:
| Variable | Value | Example |
|----------|-------|---------|
| `SHARED_PROJECT_ID` | Your project ID | `2y2Glhz5Wy0dBNf6BOR_-` |
| `SHARED_ENVIRONMENT_ID` | Your environment ID | `RqE9OFMdLwkzN7pif1xN8` |
**How to set in Dokploy UI**:
- Go to Project → Settings → Environment Variables
- Add variables at **project level** (not application level)
### 3. Deploy Portal Application
Deploy the portal **inside the same shared project**:
1. **Application Details**:
- Name: `ai-stack-deployer-prod`
- Type: Docker Compose
- Compose File: `docker-compose.prod.yml`
- Branch: `main`
2. **Reference the project variables in the portal application's Environment editor in Dokploy**:
```env
SHARED_PROJECT_ID=${{project.SHARED_PROJECT_ID}}
SHARED_ENVIRONMENT_ID=${{project.SHARED_ENVIRONMENT_ID}}
```
3. **Dokploy resolves `${{project.VAR}}`** to the actual project-level value and writes it to the application's `.env` file; the docker-compose file then reads it with standard `${VAR}` syntax.
---
## Benefits
### ✅ Centralized Management
All user stacks in one place:
- Easy to list all active stacks
- Shared monitoring dashboard
- Centralized logging
### ✅ Resource Efficiency
- No overhead of separate projects per user
- Shared network and resources
- Easier to manage quotas
### ✅ Simplified Configuration
- Project-level environment variables shared by all stacks
- Single source of truth for common configs
- Easy to update STACK_IMAGE for all users
### ✅ Better Organization
```
Projects in Dokploy:
├── ai-stack-portal (500 user applications) ✅ Clean
└── NOT:
├── ai-stack-john
├── ai-stack-jane
├── ai-stack-alice
└── ... (500 separate projects) ❌ Messy
```
---
## Fallback Behavior
If `SHARED_PROJECT_ID` and `SHARED_ENVIRONMENT_ID` are **not set**, the portal falls back to **legacy behavior**:
```javascript
// Code in src/orchestrator/production-deployer.ts (lines 187-196)
const sharedProjectId = config.sharedProjectId || process.env.SHARED_PROJECT_ID;
const sharedEnvironmentId = config.sharedEnvironmentId || process.env.SHARED_ENVIRONMENT_ID;
if (sharedProjectId && sharedEnvironmentId) {
// Use shared project ✅
state.resources.projectId = sharedProjectId;
state.resources.environmentId = sharedEnvironmentId;
return;
}
// Fallback: Create separate project per user ⚠️
const projectName = `ai-stack-${config.stackName}`;
const existingProject = await this.client.findProjectByName(projectName);
// ...
```
**This ensures backwards compatibility** but is not recommended.
---
## Troubleshooting
### Portal Creates Separate Projects Instead of Using Shared Project
**Cause**: `SHARED_PROJECT_ID` or `SHARED_ENVIRONMENT_ID` not set.
**Solution**:
1. Check project-level variables in Dokploy:
```bash
curl -s "http://10.100.0.20:3000/api/project.one?projectId=YOUR_PROJECT_ID" \
-H "Authorization: Bearer $DOKPLOY_API_TOKEN" | \
jq '.environmentVariables'
```
2. Ensure the portal application's Environment editor references them with `${{project.VAR}}`, and that the docker-compose file reads them with standard substitution:
```yaml
environment:
  - SHARED_PROJECT_ID=${SHARED_PROJECT_ID}
  - SHARED_ENVIRONMENT_ID=${SHARED_ENVIRONMENT_ID}
```
3. Redeploy the portal application.
### Variable Reference Not Working
**Symptom**: Portal logs show `undefined` for `SHARED_PROJECT_ID`.
**Cause**: Using the right syntax in the wrong place.
**In docker-compose files**, use standard Docker Compose substitution:
```yaml
- SHARED_PROJECT_ID=${SHARED_PROJECT_ID}  # ✅ read from the Dokploy-managed .env
```
**In Dokploy's Environment editor**, use the project-variable reference:
```env
SHARED_PROJECT_ID=${{project.SHARED_PROJECT_ID}}  # ✅ resolved by Dokploy
```
**Common mistakes**:
```yaml
- SHARED_PROJECT_ID=${{project.SHARED_PROJECT_ID}}   # ❌ in compose files: Docker Compose does not resolve this
- SHARED_PROJECT_ID=$${{project.SHARED_PROJECT_ID}}  # ❌ double $$ escapes the $, preventing substitution
```
### How to Verify Configuration
Check portal container environment:
```bash
# SSH into Dokploy host
ssh user@10.100.0.20
# Inspect portal container
docker exec ai-stack-deployer env | grep SHARED
# Should show:
SHARED_PROJECT_ID=2y2Glhz5Wy0dBNf6BOR_-
SHARED_ENVIRONMENT_ID=RqE9OFMdLwkzN7pif1xN8
```
---
## Environment-Specific Shared Projects
You can have **separate shared projects for dev/staging/prod**:
| Portal Environment | Shared Project | Purpose |
|--------------------|----------------|---------|
| Dev | `ai-stack-portal-dev` | Development user stacks |
| Staging | `ai-stack-portal-staging` | Staging user stacks |
| Prod | `ai-stack-portal` | Production user stacks |
Each portal deployment references its own shared project:
- `portal-dev.ai.flexinit.nl` → `ai-stack-portal-dev`
- `portal-staging.ai.flexinit.nl` → `ai-stack-portal-staging`
- `portal.ai.flexinit.nl` → `ai-stack-portal`
---
## Migration from Legacy
If you're currently using the legacy behavior (separate projects per user):
### Option 1: Gradual Migration
- New deployments use shared project
- Old deployments remain in separate projects
- Migrate old stacks manually over time
### Option 2: Full Migration
1. Create shared project
2. Set project-level variables
3. Redeploy all user stacks to shared project
4. Delete old separate projects
**Note**: Migration requires downtime for each stack being moved.
---
## Reference
- **Environment Variable Syntax**: See Dokploy docs on project-level variables
- **Code Location**: `src/orchestrator/production-deployer.ts` (lines 178-200)
- **Example IDs**: `.env.example` (lines 25-27)
---
**Questions?** Check the main deployment guide: `DOKPLOY_DEPLOYMENT.md`

---
|-----|---------|
| `GITEA_API_TOKEN` | Gitea API access for workflow status |
| `DOKPLOY_API_TOKEN` | Dokploy deployment API (BWS ID: `6b3618fc-ba02-49bc-bdc8-b3c9004087bc`) |
---
## Testing Session: 2026-01-13
### Session Summary
**Goal:** Verify multi-environment deployment setup and shared project configuration.
### Completed Tasks
| Task | Status | Evidence |
|------|--------|----------|
| Workflow separation (dev/staging/main) | ✅ | Committed as `eb2745d` |
| Dollar sign escaping (`$${{project.VAR}}`) | ✅ | Verified in all docker-compose.*.yml |
| Shared project exists | ✅ | `ai-stack-portal` (ID: `2y2Glhz5Wy0dBNf6BOR_-`) |
| Environment IDs retrieved | ✅ | See below |
| Local dev server health | ✅ | `/health` returns healthy |
### Environment IDs
```
Project: ai-stack-portal
ID: 2y2Glhz5Wy0dBNf6BOR_-
Environments:
- production: _dKAmxVcadqi-z73wKpEB (default)
- deployments: RqE9OFMdLwkzN7pif1xN8 (for user stacks)
- test: KVKn5fXGz10g7KVxPWOQj
```
### Blockers Identified
#### BLOCKER: Dokploy API Token Permissions
**Symptom:** All Dokploy API calls return `Forbidden`
```bash
# Previously working
curl -s "https://app.flexinit.nl/api/project.all" -H "x-api-key: $DOKPLOY_API_TOKEN"
# Now returns: Forbidden
# Environment endpoint
curl -s "https://app.flexinit.nl/api/environment.one?environmentId=RqE9OFMdLwkzN7pif1xN8" -H "x-api-key: $DOKPLOY_API_TOKEN"
# Returns: Forbidden
```
**Root Cause:** The API token `app_deployment...` has been revoked or has limited scope.
**Impact:**
- Cannot verify Docker image exists in registry
- Cannot test name availability (requires `environment.one`)
- Cannot create applications or compose stacks
- Cannot deploy portal to Dokploy
**Resolution Required:**
1. Log into Dokploy UI at https://app.flexinit.nl
2. Navigate to Settings → API Keys
3. Generate new API key with full permissions:
- Read/Write access to projects
- Read/Write access to applications
- Read/Write access to compose stacks
- Read/Write access to domains
4. Update `.env` with new token
5. Update BWS secret (ID: `6b3618fc-ba02-49bc-bdc8-b3c9004087bc`)
### Local Testing Results
```bash
# Health check - WORKS
curl -s "http://localhost:3000/health"
# {"status":"healthy","timestamp":"2026-01-13T13:01:46.100Z","version":"0.2.0",...}
# Name check - FAILS (API token issue)
curl -s "http://localhost:3000/api/check/test-stack"
# {"available":false,"valid":false,"error":"Failed to check availability"}
```
### Required .env Configuration
```bash
# Added for shared project deployment
SHARED_PROJECT_ID=2y2Glhz5Wy0dBNf6BOR_-
SHARED_ENVIRONMENT_ID=RqE9OFMdLwkzN7pif1xN8
```
### Next Steps After Token Fix
1. Verify `project.all` API works with new token
2. Deploy portal to Dokploy (docker-compose.dev.yml)
3. Test end-to-end stack deployment
4. Verify stacks deploy to shared project
5. Clean up test deployments
### Commands Reference
```bash
# Test API token
source .env && curl -s "https://app.flexinit.nl/api/project.all" \
-H "x-api-key: $DOKPLOY_API_TOKEN" | jq '.[].name'
# Get environment applications
source .env && curl -s "https://app.flexinit.nl/api/environment.one?environmentId=RqE9OFMdLwkzN7pif1xN8" \
-H "x-api-key: $DOKPLOY_API_TOKEN" | jq '.applications'
# Deploy test stack
curl -X POST http://localhost:3000/api/deploy \
-H "Content-Type: application/json" \
-d '{"name":"test-'$(date +%s | tail -c 4)'"}'
```

---
```diff
@@ -10,9 +10,10 @@ import type { DeploymentState as OrchestratorDeploymentState } from './orchestra
 const PORT = parseInt(process.env.PORT || '3000', 10);
 const HOST = process.env.HOST || '0.0.0.0';

-// Extended deployment state for HTTP server (adds logs)
+// Extended deployment state for HTTP server (adds logs and language)
 interface HttpDeploymentState extends OrchestratorDeploymentState {
   logs: string[];
+  lang: string;
 }

 const deployments = new Map<string, HttpDeploymentState>();
@@ -90,6 +91,7 @@ async function deployStack(deploymentId: string): Promise<void> {
     registryId: process.env.STACK_REGISTRY_ID,
     sharedProjectId: process.env.SHARED_PROJECT_ID,
     sharedEnvironmentId: process.env.SHARED_ENVIRONMENT_ID,
+    lang: deployment.lang,
   });

   // Final update with logs
@@ -144,7 +146,7 @@ app.get('/health', (c) => {
 app.post('/api/deploy', async (c) => {
   try {
     const body = await c.req.json();
-    const { name } = body;
+    const { name, lang = 'en' } = body;

     // Validate name
     const validation = validateStackName(name);
@@ -197,6 +199,7 @@ app.post('/api/deploy', async (c) => {
         started: new Date().toISOString(),
       },
       logs: [],
+      lang,
     };
     deployments.set(deploymentId, deployment);
```

---
**New file**: `src/lib/i18n-backend.ts` (65 lines)

```typescript
export const backendTranslations = {
  en: {
    'initializing': 'Initializing deployment',
    'creatingProject': 'Creating project',
    'gettingEnvironment': 'Getting environment ID',
    'environmentAvailable': 'Environment ID already available',
    'environmentRetrieved': 'Environment ID retrieved',
    'creatingApplication': 'Creating application',
    'configuringApplication': 'Configuring application',
    'creatingDomain': 'Creating domain',
    'deployingApplication': 'Deploying application',
    'waitingForSSL': 'Waiting for SSL certificate provisioning...',
    'waitingForStart': 'Waiting for application to start',
    'deploymentSuccess': 'Application deployed successfully',
    'verifyingHealth': 'Verifying application health',
  },
  nl: {
    'initializing': 'Implementatie initialiseren',
    'creatingProject': 'Project aanmaken',
    'gettingEnvironment': 'Omgeving ID ophalen',
    'environmentAvailable': 'Omgeving ID al beschikbaar',
    'environmentRetrieved': 'Omgeving ID opgehaald',
    'creatingApplication': 'Applicatie aanmaken',
    'configuringApplication': 'Applicatie configureren',
    'creatingDomain': 'Domein aanmaken',
    'deployingApplication': 'Applicatie implementeren',
    'waitingForSSL': 'Wachten op SSL-certificaat...',
    'waitingForStart': 'Wachten tot applicatie start',
    'deploymentSuccess': 'Applicatie succesvol geïmplementeerd',
    'verifyingHealth': 'Applicatie gezondheid verifiëren',
  },
  ar: {
    'initializing': 'جاري التهيئة',
    'creatingProject': 'إنشاء المشروع',
    'gettingEnvironment': 'الحصول على معرف البيئة',
    'environmentAvailable': 'معرف البيئة متاح بالفعل',
    'environmentRetrieved': 'تم استرداد معرف البيئة',
    'creatingApplication': 'إنشاء التطبيق',
    'configuringApplication': 'تكوين التطبيق',
    'creatingDomain': 'إنشاء النطاق',
    'deployingApplication': 'نشر التطبيق',
    'waitingForSSL': 'انتظار شهادة SSL...',
    'waitingForStart': 'انتظار بدء التطبيق',
    'deploymentSuccess': 'تم نشر التطبيق بنجاح',
    'verifyingHealth': 'التحقق من صحة التطبيق',
  },
} as const;

export type BackendLanguage = keyof typeof backendTranslations;
export type BackendTranslationKey = keyof typeof backendTranslations.en;

export function createTranslator(lang: BackendLanguage = 'en') {
  return (key: BackendTranslationKey, params?: Record<string, string | number>): string => {
    const translations = backendTranslations[lang] || backendTranslations.en;
    let text: string = translations[key];
    if (params) {
      Object.entries(params).forEach(([paramKey, value]) => {
        text = text.replace(`{${paramKey}}`, String(value));
      });
    }
    return text;
  };
}
```
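Unknown languages fall back to English, and `{param}` placeholders are interpolated at call time. A self-contained mini version of the same pattern (`makeTranslator` and the two sample keys are illustrative, not the real module):

```typescript
// Minimal re-implementation of the translator pattern, for illustration.
const translations = {
  en: { deploymentSuccess: 'Application deployed successfully', waitingFor: 'Waiting {seconds}s' },
  nl: { deploymentSuccess: 'Applicatie succesvol geïmplementeerd', waitingFor: 'Wachten {seconds}s' },
} as const;

type Lang = keyof typeof translations;
type Key = keyof typeof translations.en;

function makeTranslator(lang: string) {
  // Unknown language codes fall back to English.
  const table = translations[lang as Lang] ?? translations.en;
  return (key: Key, params?: Record<string, string | number>): string => {
    let text: string = table[key];
    for (const [k, v] of Object.entries(params ?? {})) {
      text = text.replace(`{${k}}`, String(v)); // interpolate {param} placeholders
    }
    return text;
  };
}

const t = makeTranslator('nl');
console.log(t('deploymentSuccess')); // Applicatie succesvol geïmplementeerd
console.log(makeTranslator('fr')('waitingFor', { seconds: 30 })); // Waiting 30s
```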

View File

@@ -11,6 +11,7 @@
*/ */
import { DokployProductionClient } from '../api/dokploy-production.js'; import { DokployProductionClient } from '../api/dokploy-production.js';
import { createTranslator, type BackendLanguage } from '../lib/i18n-backend.js';
export interface DeploymentConfig { export interface DeploymentConfig {
stackName: string; stackName: string;
@@ -22,6 +23,7 @@ export interface DeploymentConfig {
registryId?: string; registryId?: string;
sharedProjectId?: string; sharedProjectId?: string;
sharedEnvironmentId?: string; sharedEnvironmentId?: string;
lang?: string;
} }
export interface DeploymentState { export interface DeploymentState {
@@ -71,10 +73,12 @@ export type ProgressCallback = (state: DeploymentState) => void;
export class ProductionDeployer { export class ProductionDeployer {
private client: DokployProductionClient; private client: DokployProductionClient;
private progressCallback?: ProgressCallback; private progressCallback?: ProgressCallback;
private t: ReturnType<typeof createTranslator>;
constructor(client: DokployProductionClient, progressCallback?: ProgressCallback) { constructor(client: DokployProductionClient, progressCallback?: ProgressCallback) {
this.client = client; this.client = client;
this.progressCallback = progressCallback; this.progressCallback = progressCallback;
this.t = createTranslator('en');
} }
private notifyProgress(state: DeploymentState): void { private notifyProgress(state: DeploymentState): void {
@@ -87,13 +91,15 @@ export class ProductionDeployer {
* Deploy a complete AI stack with full production safeguards * Deploy a complete AI stack with full production safeguards
*/ */
async deploy(config: DeploymentConfig): Promise<DeploymentResult> { async deploy(config: DeploymentConfig): Promise<DeploymentResult> {
this.t = createTranslator((config.lang || 'en') as BackendLanguage);
const state: DeploymentState = { const state: DeploymentState = {
id: `dep_${Date.now()}_${Math.random().toString(36).substring(2, 9)}`, id: `dep_${Date.now()}_${Math.random().toString(36).substring(2, 9)}`,
stackName: config.stackName, stackName: config.stackName,
phase: 'initializing', phase: 'initializing',
status: 'in_progress', status: 'in_progress',
progress: 0, progress: 0,
message: 'Initializing deployment', message: this.t('initializing'),
resources: {}, resources: {},
timestamps: { timestamps: {
started: new Date().toISOString(), started: new Date().toISOString(),
@@ -228,12 +234,12 @@ export class ProductionDeployer {
private async getEnvironment(state: DeploymentState): Promise<void> { private async getEnvironment(state: DeploymentState): Promise<void> {
state.phase = 'getting_environment'; state.phase = 'getting_environment';
state.progress = 25; state.progress = 25;
state.message = 'Getting environment ID'; state.message = this.t('gettingEnvironment');
// Skip if we already have environment ID from project creation // Skip if we already have environment ID from project creation
if (state.resources.environmentId) { if (state.resources.environmentId) {
console.log('Environment ID already available from project creation'); console.log('Environment ID already available from project creation');
state.message = 'Environment ID already available'; state.message = this.t('environmentAvailable');
return; return;
} }
@@ -243,7 +249,7 @@ export class ProductionDeployer {
const environment = await this.client.getDefaultEnvironment(state.resources.projectId); const environment = await this.client.getDefaultEnvironment(state.resources.projectId);
state.resources.environmentId = environment.environmentId; state.resources.environmentId = environment.environmentId;
state.message = 'Environment ID retrieved'; state.message = this.t('environmentRetrieved');
} }
   private async createOrFindApplication(
@@ -252,7 +258,7 @@ export class ProductionDeployer {
   ): Promise<void> {
     state.phase = 'creating_application';
     state.progress = 40;
-    state.message = 'Creating application';
+    state.message = this.t('creatingApplication');

     if (!state.resources.environmentId) {
       throw new Error('Environment ID not available');
@@ -279,7 +285,7 @@ export class ProductionDeployer {
   ): Promise<void> {
     state.phase = 'configuring_application';
     state.progress = 50;
-    state.message = 'Configuring application with Docker image';
+    state.message = this.t('configuringApplication');

     if (!state.resources.applicationId) {
       throw new Error('Application ID not available');
@@ -332,7 +338,7 @@ export class ProductionDeployer {
   ): Promise<void> {
     state.phase = 'creating_domain';
     state.progress = 70;
-    state.message = 'Creating domain';
+    state.message = this.t('creatingDomain');

     if (!state.resources.applicationId) {
       throw new Error('Application ID not available');
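Every step in this diff mutates a shared `DeploymentState` object (`phase`, `progress`, `message`, `resources.*`, `timestamps`). An illustrative reconstruction of that shape, inferred only from the fields touched here — the actual interface lives elsewhere in the file and may differ:

```typescript
// Illustrative reconstruction of the state object these methods mutate.
// Field names come from the diff; the exact interface is an assumption.
interface DeploymentState {
  phase: string;                  // e.g. 'getting_environment', 'deploying'
  progress: number;               // 0-100, bumped at each step
  message: string;                // user-facing status, now localized via this.t()
  resources: {
    projectId?: string;
    environmentId?: string;
    applicationId?: string;       // filled in as each Dokploy resource is created
  };
  timestamps: { started: string };
}

// Hypothetical initializer mirroring the object literal at the top of the diff.
function initialState(message: string): DeploymentState {
  return {
    phase: 'initializing',
    progress: 0,
    message,
    resources: {},
    timestamps: { started: new Date().toISOString() },
  };
}

const s = initialState('Initializing deployment');
console.log(s.phase, s.progress, s.message);
```

The optional `resources` fields explain the repeated `if (!state.resources.applicationId) throw ...` guards: each step validates that the previous step actually populated the ID it depends on.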
@@ -359,7 +365,7 @@ export class ProductionDeployer {
   private async deployApplication(state: DeploymentState): Promise<void> {
     state.phase = 'deploying';
     state.progress = 85;
-    state.message = 'Triggering deployment';
+    state.message = this.t('deployingApplication');

     if (!state.resources.applicationId) {
       throw new Error('Application ID not available');
@@ -375,7 +381,7 @@ export class ProductionDeployer {
   ): Promise<void> {
     state.phase = 'verifying_health';
     state.progress = 95;
-    state.message = 'Verifying application status via Dokploy';
+    state.message = this.t('verifyingHealth');

     if (!state.resources.applicationId) {
       throw new Error('Application ID not available');
@@ -392,13 +398,13 @@ export class ProductionDeployer {
       console.log(`Application status: ${appStatus}`);

       if (appStatus === 'done') {
-        state.message = 'Waiting for SSL certificate provisioning...';
+        state.message = this.t('waitingForSSL');
         state.progress = 98;
         this.notifyProgress(state);
         await this.sleep(15000);

-        state.message = 'Application deployed successfully';
+        state.message = this.t('deploymentSuccess');
         return;
       }
@@ -410,7 +416,7 @@ export class ProductionDeployer {
       }

       const elapsed = Math.round((Date.now() - startTime) / 1000);
-      state.message = `${this.t('waitingForStart')} (${elapsed}s)...`;
+      state.message = `${this.t('waitingForStart')} (${elapsed}s)...`;
       this.notifyProgress(state);
       await this.sleep(interval);
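The last two hunks sit inside a poll-until-done loop: check the application status, report elapsed time, sleep, repeat. A self-contained sketch of that pattern — the function name, callbacks, and timeout values are illustrative assumptions, not the deployer's actual API:

```typescript
// Sketch of the poll-and-wait pattern used during health verification.
// waitForStatus, getStatus, onProgress, and the default timings are
// hypothetical names/values for illustration.
async function waitForStatus(
  getStatus: () => Promise<string>,
  onProgress: (message: string) => void,
  timeoutMs = 120_000,
  intervalMs = 5_000,
): Promise<boolean> {
  const startTime = Date.now();
  while (Date.now() - startTime < timeoutMs) {
    // Resolve as soon as the backend reports the deployment finished.
    if ((await getStatus()) === 'done') return true;

    // Surface progress with elapsed seconds, mirroring the diff's message.
    const elapsed = Math.round((Date.now() - startTime) / 1000);
    onProgress(`Waiting for application to start (${elapsed}s)...`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false; // timed out without ever seeing 'done'
}
```

Reporting progress before each sleep (rather than after) means the client sees a status line immediately on the first poll instead of waiting one full interval.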