Compare commits

10 Commits

47 changed files with 1287 additions and 28 deletions

4
.gitignore vendored
View File

@@ -1,3 +1,5 @@
volumes
apps/proxy
.DS_Store
apps/administration/*
apps/tools/app/*

195
README.md
View File

@@ -1,3 +1,196 @@
# mindboost-infrastructure
All the software used and hosted by mindboost, organized in containers.
## Project Structure
./apps/
├── docker-compose.all.yml # Orchestrates all Docker Compose stacks
├── frontend/
│ ├── docker-compose.yml
│ └── src/ # Vue.js frontend source code
├── backend/
│ ├── docker-compose.yml
│ └── src/ # Laravel backend source code
├── database/
│ └── docker-compose.yml # MariaDB stack
├── website/
│ └── docker-compose.yml # KirbyCMS public site stack
├── administration/
│ └── docker-compose.yml # Portainer stack
├── proxy/
│ └── docker-compose.yml # Traefik, Crowdsec, and Bouncer stack
├── develop/
│ └── docker-compose.yml # Gitea, Jenkins, and Adminer stack
└── tools/
└── docker-compose.yml # Nextcloud, LimeSurvey, and LinkStack stack
## Current Services
1. Frontend (Vue.js)
2. Backend (Laravel)
3. Database (MariaDB)
4. Proxy (Traefik, Crowdsec, Bouncer)
## Upcoming Services
1. Website (KirbyCMS)
2. Administration (Portainer)
3. Development Tools (Gitea, Jenkins, Adminer)
4. Utility Tools (Nextcloud, LimeSurvey, LinkStack)
## Service Descriptions
### Current Services
- **Frontend**: Vue.js based user interface for the mindboost application.
- **Backend**: Laravel based API and server-side logic for the mindboost application.
- **Database**: MariaDB for data storage and management.
- **Proxy**: Traefik for reverse proxy, Crowdsec and Bouncer for security.
### Upcoming Services
- **Website**: KirbyCMS for the public-facing website.
- **Administration**: Portainer for container management and monitoring.
- **Development Tools**:
- Gitea: Self-hosted Git service
- Jenkins: Continuous Integration/Continuous Deployment (CI/CD) tool
- Adminer: Database management tool
- **Utility Tools**:
- Nextcloud: File hosting and collaboration platform
- LimeSurvey: Online survey tool
- LinkStack: Link management tool
## Deployment
Each service or group of related services has its own `docker-compose.yml` file in its respective folder under `./apps/`. This structure allows for modular deployment and easier management of individual services.
To deploy a service, navigate to its directory and run:
```bash
docker compose up -d
```
For the entire infrastructure, the root `docker-compose.all.yml` file in `./apps/` orchestrates all services together.
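A trimmed sketch of that root file, using Compose's `include:` feature (paths as in this repository; only a few stacks shown):
```yaml
# Root orchestration file: pulls the per-stack Compose files into one project.
include:
  - path: ./frontend/docker-compose.yml
    env_file:
      - ../env/.env.all
      - ../env/${ENVIRONMENT:-development}/.env.frontend
  - path: ./backend/docker-compose.yml
  - path: ./database/docker-compose.yml
```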
## Environment Configuration
Environment variables are managed in a centralized `env` folder at the root of the project. This structure allows for easy management of different environments and services.
./env/
├── development/
│ ├── frontend.env
│ ├── backend.env
│ ├── database.env
│ └── ...
├── staging/
│ ├── frontend.env
│ ├── backend.env
│ ├── database.env
│ └── ...
└── production/
├── frontend.env
├── backend.env
├── database.env
└── ...
Each service's `docker-compose.yml` file references the appropriate `.env` file based on the current environment. For example:
```yaml
services:
  backend:
    env_file:
      - ../../env/${ENVIRONMENT}/backend.env
```
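`${ENVIRONMENT}` is resolved by Docker Compose when the file is parsed, so it must be available at that point — either exported in the shell or supplied through an env file such as the central `env/.env.all`, which sets `ENVIRONMENT=development`. A minimal sketch of both options:
```bash
# Option 1: export the environment in the shell before starting a stack
export ENVIRONMENT=staging
docker compose up -d

# Option 2: let the central env file provide the value for interpolation
docker compose --env-file ../../env/.env.all up -d
```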
## Networking
Our infrastructure uses a two-tier network model to enhance security and isolate services:
1. Proxy Network (proxy_network):
- Exposed to the internet and contains the Traefik reverse proxy.
- Only services that need to be publicly accessible should be connected to this network.
- Example services: Traefik, frontend application.
2. Internal Networks:
- Separate internal networks are created for each public service that needs to communicate with internal services.
- These networks are not directly accessible from the internet and provide secure communication between public and internal services.
- Examples: backend_network, database_network, etc.
Service Network Configuration:
- Frontend: Connected to proxy_network and backend_network
- Backend API: Connected to backend_network and database_network
- Database: Connected only to database_network
- Traefik: Connected only to proxy_network
This structure ensures that:
- The proxy (Traefik) can route traffic to public-facing services.
- Internal services (like databases) are not directly accessible from the proxy network.
- Each connection between a public and an internal service has its own isolated network.
This configuration minimizes the attack surface by isolating networks and ensuring that services only have access to the networks they absolutely need. Each connection between a public and an internal service is protected by a dedicated internal network, further enhancing security.
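A Compose-level sketch of this wiring (service names and images are placeholders here; the real stacks define their own):
```yaml
services:
  traefik:
    image: traefik:2.11
    networks: [proxy_network]                     # public tier only
  frontend:
    image: nginx:alpine                           # stands in for the Vue.js build
    networks: [proxy_network, backend_network]
  backend:
    image: php:8.2-apache                         # stands in for the Laravel image
    networks: [backend_network, database_network]
  database:
    image: mariadb:latest
    networks: [database_network]                  # never reachable from the proxy tier
networks:
  proxy_network:
    external: true                                # created and shared by the proxy stack
  backend_network:
  database_network:
```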
## Volumes
Persistent data should be managed using named volumes or bind mounts, depending on the requirements of each service. This ensures data persistence across container restarts and updates.
The `volumes/` folder contains subdirectories for different volumes used by various applications in the infrastructure. This centralized structure allows for easier management and backup of persistent data.
./volumes/
├── backend/ # Volume for backend-specific data
├── database/ # Volume for MariaDB data
├── website/ # Volume for KirbyCMS data
├── administration/ # Volume for Portainer data
├── develop/
│ ├── gitea/ # Volume for Gitea repositories and data
│ └── jenkins/ # Volume for Jenkins data and job configurations
└── tools/
├── nextcloud/ # Volume for Nextcloud files and data
├── limesurvey/ # Volume for LimeSurvey data
└── linkstack/ # Volume for LinkStack data
Each subdirectory corresponds to a specific service or group of services, containing the persistent data that needs to be preserved across container restarts or redeployments.
When configuring Docker Compose files, reference these volume paths to ensure data persistence. For example:
```yaml
volumes:
  - ./volumes/database:/var/lib/mysql
```
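Where Docker-managed storage is preferred over a bind mount, the same service can use a named volume instead (`database_data` is an illustrative name):
```yaml
services:
  database:
    image: mariadb:latest
    volumes:
      - database_data:/var/lib/mysql   # named volume managed by Docker
volumes:
  database_data:
```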
## Scripts
The `scripts/` folder contains a collection of utility scripts for deployment, backup, and maintenance tasks. These scripts are designed to automate common operations and ensure consistency across different environments.
./scripts/
├── deployment/
│ ├── deploy-app.sh # Script for deploying the main application
│ └── deploy-traefik.sh # Script for deploying Traefik
├── backup/
│ ├── backup-database.sh # Script for backing up the database
│ └── backup-files.sh # Script for backing up file storage
└── maintenance/
├── update-services.sh # Script for updating all services
└── health-check.sh # Script for performing health checks on services
These scripts can be run from the command line to perform various tasks related to the infrastructure. Always review and test scripts in a safe environment before using them in production.
To use a script, navigate to the scripts directory and run:
```bash
./script-name.sh
```

View File

@@ -0,0 +1,36 @@
### Backend (./apps/backend/docker-compose.yml)
services:
backend:
container_name: ${INFRASTRUCTURE_LABEL}-laravel-${ENVIRONMENT}
profiles: ["laravel", "backend", "all", "app"]
env_file:
- ../../env/.env.all
- ../../env/${ENVIRONMENT}/.env.proxy
- ../../env/${ENVIRONMENT}/.env.database
- ../../env/${ENVIRONMENT}/.env.backend
depends_on:
- database
build:
context: ./src
dockerfile: Dockerfile
labels:
- "traefik.enable=${TRAEFIK_ENABLE}"
- "traefik.http.routers.backend.entrypoints=${TRAEFIK_ENTRYPOINT}"
- "traefik.http.routers.backend.rule=Host(`${BACKEND_DOMAIN}`)"
- "traefik.http.routers.backend.tls=true"
- "traefik.http.routers.backend.tls.certresolver=${TRAEFIK_CERT_RESOLVER}"
- "traefik.http.routers.backend.tls.domains[0].main=`${BACKEND_DOMAIN}`"
- "traefik.http.services.backend.loadbalancer.server.port=${BACKEND_PORT:-8000}"
- "traefik.docker.network=${TRAEFIK_NETWORK}"
# Traefik-Crowdsec Stack
backend-redis:
image: redis:alpine
container_name: ${INFRASTRUCTURE_LABEL}-laravelredis-${ENVIRONMENT}
profiles: ["redis", "backend", "all"]
restart: unless-stopped
command: redis-server --appendonly yes --requirepass laravel-redis-passwort # set the Redis password here
volumes:
- ../../volumes/backend/redis:/data
networks:
backend:

View File

@@ -0,0 +1,40 @@
### Database (./apps/database/docker-compose.yml)
# - [ ] Create a MariaDB service
# - [ ] Configure volumes for persistent storage of database data
# - [ ] Set up environment variables using the new structure (../../env/${ENVIRONMENT}/database.env)
# - [ ] Configure networking to allow connections from the backend service
# - [ ] Set up regular backup jobs for the database
# - [ ] Configure appropriate resource limits and restart policies
services:
database:
profiles: ["all", "mariadb", "backend", "app"]
image: mariadb:latest
container_name: ${INFRASTRUCTURE_LABEL}-mariadb-${ENVIRONMENT}
command: --bind-address=0.0.0.0
hostname: ${MARIADB_HOST}
env_file:
- ../../env/.env.all
- ../../env/${ENVIRONMENT:-development}/.env.database
- ../../env/${ENVIRONMENT:-development}/.env.proxy
environment:
- MARIADB_USER=${MARIADB_USER}
- MARIADB_DATABASE=${MARIADB_DATABASE}
- MARIADB_PASSWORD=${MARIADB_PASSWORD}
- MARIADB_ROOT_PASSWORD=root-mindboost
volumes:
- ../../volumes/database/mariadb:/var/lib/mysql
networks:
- backend
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
interval: 10s
retries: 3
adminer:
profiles: ["all", "mariadb", "backend", "app"]
image: adminer
container_name: local_adminer
restart: always
ports:
- 8082:8080
networks:
- backend

View File

@@ -0,0 +1,9 @@
### Develop (./apps/develop/docker-compose.yml)
# - [ ] Create services for Gitea, Jenkins, and Adminer
# - [ ] Configure volumes for persistent storage of Git repositories, Jenkins data, and Adminer settings
# - [ ] Set up environment variables using the new structure (../../env/${ENVIRONMENT}/develop.env)
# - [ ] Configure networking to allow these services to communicate with each other and the necessary application services
# - [ ] Set up access controls and security measures for development tools
include:
- ./gitea/docker-compose.yml

View File

@@ -0,0 +1,44 @@
services:
gitea:
image: gitea/gitea:latest
container_name: ${INFRASTRUCTURE_LABEL:-mindboost}-gitea
profiles: ["all", "gitea","develop"]
restart: always
volumes:
- ${GITEA_VOLUME_PATH}:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
depends_on:
- gitea_db
labels:
- "traefik.enable=${TRAEFIK_ENABLE}"
- "traefik.http.routers.gitea.entrypoints=${TRAEFIK_ENTRYPOINT}"
- "traefik.http.routers.gitea.rule=(Host(`${GITEA_DOMAIN})`)"
- "traefik.http.routers.gitea.tls=true"
- "traefik.http.routers.gitea.tls.certresolver=${TRAEFIK_CERT_RESOLVER}"
- "traefik.http.routers.gitea.service=gitea"
- 'traefik.http.services.gitea.loadbalancer.server.port=3000'
- "traefik.http.routers.gitea.tls.domains[0].main=${GITEA_TLS_DOMAIN_MAIN}"
# SSH routing, can't route based on host so anything to port 222 will come to this container
- "traefik.tcp.routers.gitea-ssh.rule=HostSNI(`*`)"
- "traefik.tcp.routers.gitea-ssh.entrypoints=ssh"
- "traefik.tcp.routers.gitea-ssh.service=gitea-ssh-svc"
- "traefik.tcp.services.gitea-ssh-svc.loadbalancer.gitea.port=22"
gitea_db:
image: mysql:latest
container_name: ${INFRASTRUCTURE_LABEL:-mindboost}-gitea_db
profiles: ["all", "gitea","develop"]
restart: always
environment:
- MYSQL_ROOT_PASSWORD=${GITEA_MYSQL_ROOT_PASSWORD}
- MYSQL_DATABASE=${GITEA_MYSQL_DATABASE}
- MYSQL_USER=${GITEA_MYSQL_USER}
- MYSQL_PASSWORD=${GITEA_MYSQL_PASSWORD}
volumes:
- ${GITEA_DATABASE_VOLUME_PATH}:/var/lib/mysql
networks:
gitea:

View File

@@ -0,0 +1,30 @@
version: '3.8'
services:
jenkins:
image: jenkins/jenkins:lts
container_name: jenkins
ports:
- "50000:50000" # Jenkins Agent Port
volumes:
- jenkins_home:/var/jenkins_home
environment:
- JAVA_OPTS=-Djenkins.install.runSetupWizard=false
networks:
- proxy
labels:
- "traefik.enable=true"
- "traefik.http.routers.jenkins.rule=Host(`j.haslach2025.de`)"
- "traefik.http.routers.jenkins.entrypoints=websecure"
- "traefik.http.routers.jenkins.tls=true"
- "traefik.http.routers.jenkins.tls.certresolver=http_resolver"
- "traefik.http.services.jenkins.loadbalancer.server.port=8080" # interner Port von Jenkins
- "traefik.docker.network=proxy"
volumes:
jenkins_home:
driver: local
networks:
proxy:
external: true

View File

@@ -0,0 +1,48 @@
##
## ONE SCRIPT TO RULE THEM ALL
##
## This Compose file starts all available services, depending on the given ENVIRONMENT.
## To use this configuration, the following commands are available:
## To start all services:
## docker compose -f docker-compose.all.yml --env-file ../env/.env.all --profile all up -d
## To start only specific services (e.g. frontend and backend):
## docker compose -f docker-compose.all.yml --env-file ../env/.env.all --profile frontend --profile backend up -d
##
## Make sure the .env.all file exists in the given directory and contains the ENVIRONMENT value.
##
include:
- path: ./proxy/docker-compose.yml
env_file:
- ../env/.env.all
- ../env/${ENVIRONMENT:-development}/.env.proxy
- path: ./frontend/docker-compose.yml
env_file:
- ../env/.env.all
- ../env/${ENVIRONMENT:-development}/.env.frontend
- ../env/${ENVIRONMENT:-development}/.env.proxy
- path: ./backend/docker-compose.yml
- path: ./database/docker-compose.yml
- path: ./website/docker-compose.yml
env_file:
- ../env/.env.all
- ../env/${ENVIRONMENT:-development}/.env.website
- ../env/${ENVIRONMENT:-development}/.env.proxy
- path: ./administration/docker-compose.yml
env_file:
- ../env/.env.all
- ../env/${ENVIRONMENT:-development}/.env.administration
- ../env/${ENVIRONMENT:-development}/.env.proxy
- path: ./develop/docker-compose.yml
env_file:
- ../env/.env.all
- ../env/${ENVIRONMENT:-development}/.env.develop
- ../env/${ENVIRONMENT:-development}/.env.proxy
- path: ./tools/docker-compose.yml
env_file:
- ../env/.env.all
- ../env/${ENVIRONMENT:-development}/.env.tools
- ../env/${ENVIRONMENT:-development}/.env.proxy

View File

@@ -0,0 +1,27 @@
### Frontend (./apps/frontend/docker-compose.yml)
# - [ ] Create a Vue.js frontend service
# - [ ] Set up a Node.js environment for the frontend
# - [ ] Configure volumes for persistent storage of frontend assets
# - [ ] Set up environment variables using the new structure (../../env/${ENVIRONMENT}/frontend.env)
# - [ ] Configure networking to communicate with the backend service
# - [ ] Set up healthchecks for the frontend service
services:
webapp:
build:
context: ./src
dockerfile: Dockerfile
container_name: ${INFRASTRUCTURE_LABEL}-frontend-${ENVIRONMENT}
profiles: ["webapp", "frontend", "all", "app"]
depends_on:
- database
- backend
labels:
- "traefik.enable=${TRAEFIK_ENABLE}"
- "traefik.http.routers.webapp.entrypoints=${TRAEFIK_ENTRYPOINT}"
- 'traefik.http.routers.webapp.rule=Host(`${FRONTEND_DOMAIN}`) || Host(`${FRONTEND_DOMAIN_2}`)'
- "traefik.http.routers.webapp.tls=true"
- "traefik.http.routers.webapp.tls.certresolver=${TRAEFIK_CERT_RESOLVER}"
- "traefik.http.routers.webapp.tls.domains[0].main=${FRONTEND_DOMAIN}"
- "traefik.http.routers.webapp.tls.domains[0].sans=${FRONTEND_DOMAIN_2}"
- "traefik.http.services.webapp.loadbalancer.server.port=3000"
- "traefik.docker.network=${TRAEFIK_NETWORK}"

View File

@@ -0,0 +1,30 @@
services:
wireguard:
image: linuxserver/wireguard
container_name: wireguard
cap_add:
- NET_ADMIN
- SYS_MODULE
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Berlin
- SERVERURL=${SERVER_IP:?"❌ ERROR = SERVERURL is not set. Run set-server-ip.sh first."}
- SERVERPORT=51820
- PEERS=3 # Number of VPN clients to generate
- PEERDNS=auto
- INTERNAL_SUBNET=22.22.22.0
volumes:
- ../../volumes/security/wireguard/config:/config
- /lib/modules:/lib/modules
ports:
- "51820:51820/udp"
sysctls:
- net.ipv4.conf.all.src_valid_mark=1
restart: unless-stopped
networks:
- wireguard_network
networks:
wireguard_network:
driver: bridge

View File

@@ -0,0 +1,50 @@
volumes:
etc_wireguard:
services:
wg-easy:
environment:
# Change Language:
# (Supports: en, ua, ru, tr, no, pl, fr, de, ca, es, ko, vi, nl, is, pt, chs, cht, it, th, hi, ja, si)
- LANG=de
# ⚠️ Required:
# Change this to your host's public address
- WG_HOST=${SERVER_IP}
# Optional:
# - PASSWORD_HASH=$$2y$$10$$hBCoykrB95WSzuV4fafBzOHWKu9sbyVa34GJr8VV5R/pIelfEMYyG # (needs double $$, hash of 'foobar123'; see "How_to_generate_an_bcrypt_hash.md" for how to generate the hash)
# - PORT=51821
# - WG_PORT=51820
# - WG_CONFIG_PORT=92820
- WG_DEFAULT_ADDRESS=22.22.22.0
# - WG_DEFAULT_DNS=1.1.1.1
# - WG_MTU=1420
# - WG_ALLOWED_IPS=192.168.15.0/24, 10.0.1.0/24
# - WG_PERSISTENT_KEEPALIVE=25
# - WG_PRE_UP=echo "Pre Up" > /etc/wireguard/pre-up.txt
# - WG_POST_UP=echo "Post Up" > /etc/wireguard/post-up.txt
# - WG_PRE_DOWN=echo "Pre Down" > /etc/wireguard/pre-down.txt
# - WG_POST_DOWN=echo "Post Down" > /etc/wireguard/post-down.txt
# - UI_TRAFFIC_STATS=true
# - UI_CHART_TYPE=0 # (0 Charts disabled, 1 # Line chart, 2 # Area chart, 3 # Bar chart)
# - WG_ENABLE_ONE_TIME_LINKS=true
# - UI_ENABLE_SORT_CLIENTS=true
# - WG_ENABLE_EXPIRES_TIME=true
# - ENABLE_PROMETHEUS_METRICS=false
# - PROMETHEUS_METRICS_PASSWORD=$$2a$$12$$vkvKpeEAHD78gasyawIod.1leBMKg8sBwKW.pQyNsq78bXV3INf2G # (needs double $$, hash of 'prometheus_password'; see "How_to_generate_an_bcrypt_hash.md" for how to generate the hash)
image: ghcr.io/wg-easy/wg-easy
container_name: wg-easy
volumes:
- ../../volumes/wireguardeasy/:/etc/wireguard
ports:
- "51820:51820/udp"
- "51821:51821/tcp"
restart: unless-stopped
cap_add:
- NET_ADMIN
- SYS_MODULE
# - NET_RAW # ⚠️ Uncomment if using Podman
sysctls:
- net.ipv4.ip_forward=1
- net.ipv4.conf.all.src_valid_mark=1

View File

@@ -0,0 +1,2 @@
#!/bin/bash
export SERVER_IP=$(curl -s https://api.ipify.org)
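Note that `export` only affects the script's own process; for `SERVER_IP` to reach the shell that later starts the WireGuard stack, the script has to be sourced rather than executed. A minimal usage sketch:
```bash
# Load SERVER_IP into the current shell; running ./set-server-ip.sh directly
# would only set the variable in a subshell that exits immediately.
source ./set-server-ip.sh
echo "SERVER_IP is now: ${SERVER_IP}"
```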

View File

@@ -0,0 +1,67 @@
### Tools (./apps/tools/docker-compose.yml)
# - [ ] Create services for Nextcloud, LimeSurvey, and LinkStack
# - [ ] Configure volumes for persistent storage of files, survey data, and link management data
# - [ ] Set up environment variables using the new structure (../../env/${ENVIRONMENT}/tools.env)
# - [ ] Configure networking to expose these services to the internet via the proxy
# - [ ] Set up regular backup jobs for critical data in these services
services:
nextcloud-db:
image: mariadb:10.6
container_name: ${INFRASTRUCTURE_LABEL}-nextcloud-db-${ENVIRONMENT}
profiles: ["all", "tools", "nextcloud"]
command: --transaction-isolation=READ-COMMITTED --innodb_read_only_compressed=OFF
restart: unless-stopped
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
- ../../volumes/tools/${INFRASTRUCTURE_LABEL}_cloud/database:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=headpiece-constant1-denim-mindboost # set the SQL root password
- MYSQL_PASSWORD=idealist9-frayed-murkiness-mindboost # set the SQL user password
- MYSQL_DATABASE=nextcloud-mindboost # database name
- MYSQL_USER=mindboostcloud # SQL user name
- MYSQL_INITDB_SKIP_TZINFO=1
- MARIADB_AUTO_UPGRADE=1
nextcloud-redis:
image: redis:alpine
container_name: ${INFRASTRUCTURE_LABEL}-nextcloud-redis-${ENVIRONMENT}
profiles: ["all", "tools", "nextcloud"]
hostname: nextcloud-redis
restart: unless-stopped
command: redis-server --requirepass redis-mindboost-passwort # set the Redis password
cloud:
image: nextcloud
container_name: ${INFRASTRUCTURE_LABEL}-nextcloud-app-${ENVIRONMENT}
profiles: ["all", "tools", "nextcloud"]
restart: unless-stopped
depends_on:
- nextcloud-db
- nextcloud-redis
environment:
TRUSTED_PROXIES: 172.16.255.254/16
OVERWRITEPROTOCOL: https
OVERWRITECLIURL: https://${CLOUD_DOMAIN}
OVERWRITEHOST: ${CLOUD_DOMAIN}
REDIS_HOST: nextcloud-redis
REDIS_HOST_PASSWORD: redis-mindboost-passwort # repeat the Redis password set above
volumes:
- ./app:/var/www/html
- ../../volumes/tools/${INFRASTRUCTURE_LABEL}_cloudapp/:/var/www/html/data
labels:
- "traefik.enable=true"
- "traefik.http.routers.${INFRASTRUCTURE_LABEL}_cloud.entrypoints=websecure"
- "traefik.http.routers.${INFRASTRUCTURE_LABEL}_cloud.rule=Host(`${CLOUD_DOMAIN}`)"
- "traefik.http.routers.${INFRASTRUCTURE_LABEL}_cloud.tls=true"
- "traefik.http.routers.${INFRASTRUCTURE_LABEL}_cloud.tls.certresolver=http_resolver"
- 'traefik.http.routers.${INFRASTRUCTURE_LABEL}_cloud.service=cloud'
- "traefik.http.services.cloud.loadbalancer.server.port=80"
- "traefik.docker.network=${TRAEFIK_NETWORK}"
- "traefik.http.routers.${INFRASTRUCTURE_LABEL}_cloud.middlewares=nextcloud-dav,default@file"
- "traefik.http.middlewares.nextcloud-dav.replacepathregex.regex=^/.well-known/ca(l|rd)dav"
- "traefik.http.middlewares.nextcloud-dav.replacepathregex.replacement=/remote.php/dav/"
networks:
- ${TRAEFIK_NETWORK}
networks:
nextcloud:
name: ${INFRASTRUCTURE_LABEL}_nextcloud

View File

@@ -0,0 +1,23 @@
services:
kirbycms:
build:
context: ./kirby
dockerfile: Dockerfile
image: kirbycms
container_name: ${INFRASTRUCTURE_LABEL}-kirbycms-${ENVIRONMENT}
profiles: ["website","kirbycms","all"]
volumes:
- ../../volumes/website/kirbycms:/var/www/html:rw # persistent data
restart: unless-stopped
networks:
- ${TRAEFIK_NETWORK}
labels:
- "traefik.enable=${TRAEFIK_ENABLE}"
- "traefik.docker.network=${TRAEFIK_NETWORK}"
- "traefik.http.routers.kirbycms.service=kirbycms"
- "traefik.http.routers.kirbycms.tls.certresolver=${TRAEFIK_CERT_RESOLVER}"
- "traefik.http.routers.kirbycms.tls.domains[0].main=`${WEBSITE_DOMAIN}`"
- "traefik.http.routers.kirbycms.rule=Host(`${WEBSITE_DOMAIN}`)"
- "traefik.http.routers.kirbycms.entrypoints=${TRAEFIK_ENTRYPOINT}"
- "traefik.http.routers.kirbycms.tls=true"
- "traefik.http.services.kirbycms.loadbalancer.server.port=80"

View File

@@ -0,0 +1,49 @@
# Use the latest official Ubuntu image
FROM ubuntu:latest
# Set timezone
ENV TZ=Europe/Berlin
# Set geographic area using above variable
# This is necessary; otherwise the image build fails
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Remove annoying messages during package installation
ARG DEBIAN_FRONTEND=noninteractive
# Install packages: web server & PHP plus extensions
RUN apt-get update && apt-get install -y \
apache2 \
apache2-utils \
ca-certificates \
php \
libapache2-mod-php \
php-curl \
php-dom \
php-gd \
php-intl \
php-json \
php-mbstring \
php-xml \
php-zip && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# Copy virtual host configuration from current path onto existing 000-default.conf
COPY default.conf /etc/apache2/sites-available/000-default.conf
# Remove default content (existing index.html)
RUN rm /var/www/html/*
# Activate Apache modules headers & rewrite
RUN a2enmod headers rewrite
# Ensure group ownership: www-data and every member of kirbygroup should be able to edit files
RUN groupadd -g 1003 kirbygroup && usermod -aG kirbygroup www-data
RUN chown -R www-data:kirbygroup /var/www/html
RUN chmod -R g+rw /var/www/html && find /var/www/html -type d -exec chmod g+xs {} \;
# Tell container to listen to port 80 at runtime
EXPOSE 80
# Start Apache web server
CMD [ "/usr/sbin/apache2ctl", "-DFOREGROUND" ]

View File

@@ -0,0 +1,9 @@
<VirtualHost *:80>
ServerName localhost
# Set the document root
DocumentRoot "/var/www/html"
<Directory "/var/www/html">
# Allow overriding the default configuration via `.htaccess`
AllowOverride All
</Directory>
</VirtualHost>

View File

@@ -0,0 +1,7 @@
#!/bin/bash
set -e -u
# Remap www-data to the provided UID; the guard tolerates an unset USERID under `set -u`
[[ -n "${USERID:-}" ]] && usermod --uid "${USERID}" www-data
exec "$@"

View File

@@ -0,0 +1 @@
USERID=0

10
env/.env.all vendored Normal file
View File

@@ -0,0 +1,10 @@
##
## Settings that apply to the whole project, i.e. the project name and the admin user.
## ENVIRONMENT must be one of "production", "staging" or "development".
INFRASTRUCTURE_LABEL=mindboost_dev
ENVIRONMENT=development
ADMIN_USER=${INFRASTRUCTURE_LABEL}_${ENVIRONMENT}
ADMIN_PASSWORD_HASH='$2y$05$U7noO29Ru/4VB5x8TpZo3.b4VjH6AAnhufJJUG2Vs7qHCM2Cd8yIK' # bcrypt hash of "admin" (development only)
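The value is a bcrypt hash as consumed by Traefik's basic-auth middleware (`TRAEFIK_BASIC_AUTH_USERS` combines it with `ADMIN_USER`). One common way to generate such a hash, assuming `htpasswd` from `apache2-utils` is available:
```bash
# Prints "admin:<bcrypt hash>"; copy only the hash part into ADMIN_PASSWORD_HASH.
htpasswd -nbB admin 'admin'
```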

View File

View File

@@ -16,7 +16,7 @@ TRAEFIK_SERVICE_BACKEND_PORT=8000
# Frontend
TRAEFIK_ROUTER_FRONTEND_ENTRYPOINT=websecure
TRAEFIK_ROUTER_FRONTEND_RULE=Host(`app.mindboost.team` || `mindboost.app`)
TRAEFIK_ROUTER_FRONTEND_RULE=Host(`app.mindboost.team`)
TRAEFIK_ROUTER_FRONTEND_TLS=true
TRAEFIK_ROUTER_FRONTEND_CERTRESOLVER=http_resolver
TRAEFIK_ROUTER_FRONTEND_TLS_DOMAIN_MAIN=app.mindboost.team

2
env/development/.env.administration vendored Normal file
View File

@@ -0,0 +1,2 @@
PORTAINER_IMAGE=portainer/portainer-ce:latest
PORTAINER_DATA_PATH=../../../volumes/administration/portainer/data

28
env/development/.env.backend vendored Normal file
View File

@@ -0,0 +1,28 @@
# ----------------------------------
# Redis
# ----------------------------------
REDIS_PASSWORD=laravel-redis-passwort
REDIS_PORT=6379
# ----------------------------------
# Laravel Backend
# ----------------------------------
BACKEND_NETWORK=backend
APP_NAME="mindboost backend - Compose Deployment"
APP_URL=https://backend.local
LARAVEL_PORT=8000
LARAVEL_VITE_PORT=5173
DB_HOST=${MARIADB_HOST}
DB_PORT=${MARIADB_PORT}
DB_PASSWORD=${MARIADB_PASSWORD}
DB_USERNAME=${MARIADB_USER}
DB_DATABASE=${MARIADB_DATABASE}
JWT_SECRET=zMtO8sgsnc4UixWSsYWE1pK9EdpNLzxNSoIPlUpTe6dDlarM3bu4cwM80tH3jA0F
# ----------------------------------
# Adminer
# ----------------------------------
ADMINER_PORT=8080

10
env/development/.env.database vendored Normal file
View File

@@ -0,0 +1,10 @@
# ----------------------------------
# Database (MariaDB)
# ----------------------------------
MARIADB_USER=${INFRASTRUCTURE_LABEL}_${ENVIRONMENT}
MARIADB_DATABASE=${INFRASTRUCTURE_LABEL}_${ENVIRONMENT}
MARIADB_PASSWORD=1stronges-mindboostdb-passwort
MARIADB_ROOT_PASSWORD=1stronges-passwort-fuer-diedb
MARIADB_PORT=3306
MARIADB_HOST=${INFRASTRUCTURE_LABEL}_database_${ENVIRONMENT}

18
env/development/.env.develop vendored Normal file
View File

@@ -0,0 +1,18 @@
USER_UID=1000
USER_GID=1000
GITEA_VOLUME_PATH=../../../volumes/develop/gitea/gitea
GITEA_DATABASE_VOLUME_PATH=../../../volumes/develop/gitea/gitea_db
GITEA_MYSQL_ROOT_PASSWORD=very-difficult-passwort-gitea
GITEA_MYSQL_USER=gitea
GITEA_MYSQL_PASSWORD=very-difficult-gitea
GITEA_MYSQL_DATABASE=gitea
GITEA_MYSQL_ALLOW_EMPTY_PASSWORD=true
DB_HOST=gitea_db:3306
DB_NAME=gitea
DB_PASSWD=very-difficult-gitea
DB_TYPE=mysql
DB_USER=gitea

1
env/development/.env.frontend vendored Normal file
View File

@@ -0,0 +1 @@
# Frontend

48
env/development/.env.proxy vendored Normal file
View File

@@ -0,0 +1,48 @@
##
## GENERAL
##
TRAEFIK_ENABLE=true
TRAEFIK_NETWORK=proxy
TRAEFIK_BASIC_AUTH_USERS=${ADMIN_USER}:${ADMIN_PASSWORD_HASH}
TRAEFIK_CERT_RESOLVER=
##
## Domains when TRAEFIK is ENABLED
##
PORTAINER_DOMAIN=portainer.local
FRONTEND_DOMAIN=frontend.local
FRONTEND_DOMAIN_2=app.frontend.local
BACKEND_DOMAIN=backend.local
WEBSITE_DOMAIN=web.local
GITEA_DOMAIN=gitea.local
LIMESURVEY_DOMAIN=survey.local
LINKSTACK_DOMAIN=linkstack.local
TRAEFIK_DOMAIN=traefik.local
CLOUD_DOMAIN=cloud.local
### TLS for Domains
PORTAINER_TLS_DOMAIN_MAIN=${PORTAINER_DOMAIN}
FRONTEND_TLS_DOMAIN_MAIN=${FRONTEND_DOMAIN}
FRONTEND_TLS_DOMAIN_SANS=${FRONTEND_DOMAIN_2}
BACKEND_TLS_DOMAIN_MAIN=${BACKEND_DOMAIN}
WEBSITE_TLS_DOMAIN_MAIN=${WEBSITE_DOMAIN}
GITEA_TLS_DOMAIN_MAIN=${GITEA_DOMAIN}
LIMESURVEY_TLS_DOMAIN_MAIN=${LIMESURVEY_DOMAIN}
LINKSTACK_TLS_DOMAIN_MAIN=${LINKSTACK_DOMAIN}
TRAEFIK_TLS_DOMAIN_MAIN=${TRAEFIK_DOMAIN}
CLOUD_TLS_DOMAIN_MAIN=${CLOUD_DOMAIN}
##
## MIDDLEWARES
##
TRAEFIK_HTTPS_REDIRECT_MIDDLEWARE=${INFRASTRUCTURE_LABEL}-https-redirect
TRAEFIK_BASIC_AUTH_MIDDLEWARE=${INFRASTRUCTURE_LABEL}-basic-auth
##
## ENTRYPOINTS
##
TRAEFIK_ENTRYPOINT=websecure
TRAEFIK_ENTRYPOINT_HTTP=web

0
env/development/.env.tools vendored Normal file
View File

0
env/development/.env.website vendored Normal file
View File

0
env/production/.env.administration vendored Normal file
View File

1
env/production/.env.backend vendored Normal file
View File

@@ -0,0 +1 @@
${REDIS_PASSWORD}

7
env/production/.env.database vendored Normal file
View File

@@ -0,0 +1,7 @@
# ----------------------------------
# Database (MariaDB)
# ----------------------------------
MARIADB_USER=${INFRASTRUCTURE_LABEL}_${ENVIRONMENT}
MARIADB_DATABASE=${INFRASTRUCTURE_LABEL}_${ENVIRONMENT}
MARIADB_PASSWORD=1stronges-mindboostdb-passwort
MARIADB_ROOT_PASSWORD=1stronges-passwort-fuer-diedb

1
env/production/.env.develop vendored Normal file
View File

@@ -0,0 +1 @@
ADMINER_PORT=8000

0
env/production/.env.frontend vendored Normal file
View File

3
env/production/.env.portainer vendored Normal file
View File

@@ -0,0 +1,3 @@
PORTAINER_IMAGE=portainer/portainer-ce:latest
PORTAINER_DATA_PATH=/opt/containers/portainer/data
PORTAINER_DOMAIN=portainer.yourdomain.com

32
env/production/.env.proxy vendored Normal file
View File

@@ -0,0 +1,32 @@
TRAEFIK_HTTPS_REDIRECT_MIDDLEWARE=${INFRASTRUCTURE_LABEL}-https-redirect
TRAEFIK_BASIC_AUTH_MIDDLEWARE=${INFRASTRUCTURE_LABEL}-basic-auth
TRAEFIK_BASIC_AUTH_USERS=${ADMIN_USER}:${ADMIN_PASSWORD_HASH}
# Service Crowdsec
SERVICES_CROWDSEC_CONTAINER_NAME=crowdsec
SERVICES_CROWDSEC_HOSTNAME=crowdsec
SERVICES_CROWDSEC_IMAGE=crowdsecurity/crowdsec
SERVICES_CROWDSEC_IMAGE_VERSION=latest
SERVICES_CROWDSEC_NETWORKS_CROWDSEC_IPV4=172.31.254.254
# Service Traefik
SERVICES_TRAEFIK_CONTAINER_NAME=${INFRASTRUCTURE_LABEL}-traefik
SERVICES_TRAEFIK_HOSTNAME=${INFRASTRUCTURE_LABEL}-traefik
SERVICES_TRAEFIK_IMAGE=traefik
SERVICES_TRAEFIK_IMAGE_VERSION=2.11
SERVICES_TRAEFIK_LABELS_TRAEFIK_HOST=`traefik.haslach2025.de`
SERVICES_TRAEFIK_NETWORKS_CROWDSEC_IPV4=172.31.254.253
SERVICES_TRAEFIK_NETWORKS_PROXY_IPV4=172.30.255.254
# Service Traefik Crowdsec Bouncer
SERVICES_TRAEFIK_CROWDSEC_BOUNCER_CONTAINER_NAME=traefik_crowdsec_bouncer
SERVICES_TRAEFIK_CROWDSEC_BOUNCER_HOSTNAME=traefik-crowdsec-bouncer
SERVICES_TRAEFIK_CROWDSEC_BOUNCER_IMAGE=fbonalair/traefik-crowdsec-bouncer
SERVICES_TRAEFIK_CROWDSEC_BOUNCER_IMAGE_VERSION=latest
SERVICES_TRAEFIK_CROWDSEC_BOUNCER_NETWORKS_CROWDSEC_IPV4=172.31.254.252
# Network settings
NETWORKS_PROXY_NAME=proxy
NETWORKS_PROXY_SUBNET_IPV4=172.30.0.0/16
NETWORKS_CROWDSEC_NAME=crowdsec
NETWORKS_CROWDSEC_SUBNET_IPV4=172.31.0.0/16

0
env/production/.env.tools vendored Normal file
View File

0
env/production/.env.website vendored Normal file
View File

View File

@@ -0,0 +1,75 @@
#!/bin/bash
# Path to the .env.all file
ENV_FILE="../env/.env.all"
# Helper: check whether a file exists
check_file_exists() {
if [ ! -f "$1" ]; then
echo "Fehler: Die Datei $1 existiert nicht."
return 1
fi
}
# Check that .env.all exists
check_file_exists "../env/.env.all"
# Helper: read a variable from the .env.all file
get_env_var() {
grep "^$1=" "$ENV_FILE" | cut -d '=' -f2
}
# Read the INFRASTRUCTURE and ENVIRONMENT variables
INFRASTRUCTURE=$(get_env_var "INFRASTRUCTURE_LABEL")
ENVIRONMENT=$(get_env_var "ENVIRONMENT")
# Load environment variables from the .env files
set -o allexport
source ../env/.env.all
source ../env/${ENVIRONMENT}/.env.administration
set +o allexport
# List of stacks
STACKS=("administration")
# List of all environments
ENVIRONMENTS=("development" "staging" "production")
# Check that all stack-specific .env files exist
missing_files=0
for stack in "${STACKS[@]}"; do
env_file="../env/${ENVIRONMENT}/.env.${stack}"
if ! check_file_exists "$env_file"; then
missing_files=$((missing_files + 1))
fi
done
if [ $missing_files -eq 0 ]; then
echo "Alle erforderlichen .env Dateien für das ${ENVIRONMENT}-Environment sind vorhanden."
else
echo "Warnung: $missing_files .env Datei(en) fehlen. Einige Stacks könnten nicht korrekt funktionieren."
fi
# Check the stack-specific .env files for all other environments
for env in "${ENVIRONMENTS[@]}"; do
if [ "$env" != "$ENVIRONMENT" ]; then
for stack in "${STACKS[@]}"; do
env_file="../env/${env}/.env.${stack}"
if ! check_file_exists "$env_file"; then
echo "Warnung: Die Datei $env_file fehlt für das Environment $env."
fi
done
fi
done
# Print the variables
echo " "
echo "Deploying to:"
echo "INFRASTRUCTURE: ${INFRASTRUCTURE:-Not set}"
echo "ENVIRONMENT: ${ENVIRONMENT:-Not set}"
echo "-----------------------------------"
# Run the Docker Compose command
docker compose -f ../apps/docker-compose.all.yml --env-file ../env/.env.all --env-file ../env/${ENVIRONMENT}/.env.proxy --profile administration up --remove-orphans

105
scripts/deploy-all.sh Executable file
View File

@@ -0,0 +1,105 @@
#!/bin/bash
# Path to the .env.all file
ENV_FILE="../env/.env.all"
# Helper: read a variable from the .env.all file
get_env_var() {
grep "^$1=" "$ENV_FILE" | cut -d '=' -f2
}
# Read the INFRASTRUCTURE and ENVIRONMENT variables
INFRASTRUCTURE=$(get_env_var "INFRASTRUCTURE_LABEL")
ENVIRONMENT=$(get_env_var "ENVIRONMENT")
SERVER_IP=$(curl -s https://api.ipify.org)
# List of all stacks
STACKS=("administration" "frontend" "develop" "database" "proxy" "tools" "website" "backend")
# List of all environments
ENVIRONMENTS=("development" "staging" "production")
# Helper: check whether a file exists
check_file_exists() {
if [ ! -f "$1" ]; then
echo "Fehler: Die Datei $1 existiert nicht."
return 1
fi
}
# Only run the hosts-file and ACME setup below in the development environment
if [ "$ENVIRONMENT" == "development" ]; then
# Make sure acme_letsencrypt.json exists and has the correct permissions
ACME_FILE="../apps/proxy/traefik/acme_letsencrypt.json"
if [ ! -f "$ACME_FILE" ]; then
echo "🔹 Erstelle $ACME_FILE..."
touch "$ACME_FILE"
fi
echo "🔹 Setze Berechtigungen für $ACME_FILE auf 600..."
chmod 600 "$ACME_FILE"
echo "🔹 ENVIRONMENT ist 'development' Hosts aus .env.proxy werden hinzugefügt und Container gestartet."
# Path to the proxy env file
ENV_PROXY_FILE="../env/development/.env.proxy"
# Hosts file path (Linux/macOS)
HOSTS_FILE="/etc/hosts"
# Check that the env file exists
if [ ! -f "$ENV_PROXY_FILE" ]; then
echo "❌ Fehler: Die Datei $ENV_PROXY_FILE existiert nicht!"
exit 1
fi
# Read all lines matching *_DOMAIN= and extract the values
DOMAINS=($(grep -E '^[A-Z_]+_DOMAIN=' "$ENV_PROXY_FILE" | cut -d '=' -f2))
# Add each domain to /etc/hosts if it is missing
for DOMAIN in "${DOMAINS[@]}"; do
if ! grep -q "$DOMAIN" "$HOSTS_FILE"; then
echo "127.0.0.1 $DOMAIN" | sudo tee -a "$HOSTS_FILE" > /dev/null
echo "$DOMAIN zu /etc/hosts hinzugefügt"
else
echo "$DOMAIN ist bereits in /etc/hosts vorhanden"
fi
done
else
echo "❌ ENVIRONMENT ist nicht 'development' Routing über externen DNS erwartet"
fi
# Check that .env.all exists
check_file_exists "../env/.env.all"
# Check that all stack-specific .env files exist
missing_files=0
for stack in "${STACKS[@]}"; do
env_file="../env/${ENVIRONMENT}/.env.${stack}"
if ! check_file_exists "$env_file"; then
missing_files=$((missing_files + 1))
fi
done
if [ $missing_files -eq 0 ]; then
echo "Alle erforderlichen .env Dateien sind vorhanden."
else
echo "WARNUNG: $missing_files .env Datei(en) fehlen. Einige Stacks könnten nicht korrekt funktionieren."
fi
# Print the variables
echo "Deploying to:"
echo "INFRASTRUCTURE: ${INFRASTRUCTURE:-Not set}"
echo "ENVIRONMENT: ${ENVIRONMENT:-Not set}"
echo "-----------------------------------"
# Check for the --build argument
BUILD_OPTION=""
if [[ "$1" == "--build" ]]; then
BUILD_OPTION="--build"
fi
# Run the Docker Compose command
docker compose -f ../apps/docker-compose.all.yml -p ${INFRASTRUCTURE:-my} --env-file ../env/.env.all --env-file ../env/${ENVIRONMENT}/.env.proxy --profile backend up --remove-orphans $BUILD_OPTION

View File

@@ -1,22 +1,59 @@
#!/bin/bash
set -e
echo "Prüfe, ob Traefik läuft..."
# Path to the .env.all file
ENV_FILE="../env/.env.all"
# Helper: read a variable from the .env.all file
get_env_var() {
grep "^$1=" "$ENV_FILE" | cut -d '=' -f2
}
if ! docker ps --format '{{.Names}}' | grep -q 'traefik'; then
echo "Traefik läuft nicht."
read -p "Möchtest du die lokale Version zum Debuggen (docker-compose.overwrite.yml) starten? (y/n): " answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
echo "Starte lokale Version..."
docker compose -f ../apps/docker-compose.overwrite.yml up -d
else
echo "Deployment abgebrochen."
exit 1
# Read the INFRASTRUCTURE and ENVIRONMENT variables
INFRASTRUCTURE=$(get_env_var "INFRASTRUCTURE_LABEL")
ENVIRONMENT=$(get_env_var "ENVIRONMENT")
SERVER_IP=$(curl -s https://api.ipify.org)
# List of all stacks
STACKS=("administration" "frontend" "develop" "database" "proxy" "tools" "website" "backend")
# List of all environments
ENVIRONMENTS=("development" "staging" "production")
# Helper: check whether a file exists
check_file_exists() {
if [ ! -f "$1" ]; then
echo "Fehler: Die Datei $1 existiert nicht."
return 1
fi
}
# Check that .env.all exists
check_file_exists "../env/.env.all"
# Check that all stack-specific .env files exist
missing_files=0
for stack in "${STACKS[@]}"; do
env_file="../env/${ENVIRONMENT}/.env.${stack}"
if ! check_file_exists "$env_file"; then
missing_files=$((missing_files + 1))
fi
done
if [ $missing_files -eq 0 ]; then
echo "Alle erforderlichen .env Dateien sind vorhanden."
else
echo "Traefik läuft."
echo "Starte Deployment mit docker-compose.prod.yml..."
docker compose -f ../apps/docker-compose.prod.yml up -d
echo "WARNUNG: $missing_files .env Datei(en) fehlen. Einige Stacks könnten nicht korrekt funktionieren."
fi
echo "Deployment abgeschlossen."
# Print the variables
echo "Deploying to:"
echo "INFRASTRUCTURE: ${INFRASTRUCTURE:-Not set}"
echo "ENVIRONMENT: ${ENVIRONMENT:-Not set}"
echo "-----------------------------------"
# Check for the --build argument
BUILD_OPTION=""
if [[ "$1" == "--build" ]]; then
BUILD_OPTION="--build"
fi
# Run the Docker Compose command
docker compose -f ../apps/docker-compose.all.yml -p ${INFRASTRUCTURE:-my} --env-file ../env/.env.all --env-file ../env/${ENVIRONMENT}/.env.proxy --profile app up --remove-orphans $BUILD_OPTION

22
scripts/deploy-overwrite.sh Executable file
View File

@@ -0,0 +1,22 @@
#!/bin/bash
set -e
echo "Prüfe, ob Traefik läuft..."
if ! docker ps --format '{{.Names}}' | grep -q 'traefik'; then
echo "Traefik läuft nicht."
read -p "Möchtest du die lokale Version zum Debuggen (docker-compose.overwrite.yml) starten? (y/n): " answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
echo "Starte lokale Version..."
docker compose -f ../apps/docker-compose.overwrite.yml up -d
else
echo "Deployment abgebrochen."
exit 1
fi
else
echo "Traefik läuft."
echo "Starte Deployment mit docker-compose.prod.yml..."
docker compose -f ../apps/docker-compose.prod.yml up -d
fi
echo "Deployment abgeschlossen."

54
scripts/deploy-proxy.sh Executable file
View File

@@ -0,0 +1,54 @@
#!/bin/bash
# Path to the .env.all file
ENV_FILE="../env/.env.all"
# Helper: read a variable from the .env.all file
get_env_var() {
grep "^$1=" "$ENV_FILE" | cut -d '=' -f2
}
# Read the INFRASTRUCTURE and ENVIRONMENT variables
INFRASTRUCTURE=$(get_env_var "INFRASTRUCTURE_LABEL")
ENVIRONMENT=$(get_env_var "ENVIRONMENT")
# List of stacks
STACKS=("proxy")
# List of all environments
ENVIRONMENTS=("development" "staging" "production")
# Helper: check whether a file exists
check_file_exists() {
if [ ! -f "$1" ]; then
echo "Fehler: Die Datei $1 existiert nicht."
return 1
fi
}
# Check that .env.all exists
check_file_exists "../env/.env.all"
# Check that all stack-specific .env files exist
missing_files=0
for stack in "${STACKS[@]}"; do
env_file="../env/${ENVIRONMENT}/.env.${stack}"
if ! check_file_exists "$env_file"; then
missing_files=$((missing_files + 1))
fi
done
if [ $missing_files -eq 0 ]; then
echo "Alle erforderlichen .env Dateien sind vorhanden."
else
echo "WARNUNG: $missing_files .env Datei(en) fehlen. Einige Stacks könnten nicht korrekt funktionieren."
fi
# Print the variables
echo "Deploying to:"
echo "INFRASTRUCTURE: ${INFRASTRUCTURE:-Not set}"
echo "ENVIRONMENT: ${ENVIRONMENT:-Not set}"
echo "-----------------------------------"
# Run the Docker Compose command
docker compose -f ../apps/docker-compose.all.yml --env-file ../env/.env.all --env-file ../env/${ENVIRONMENT}/.env.proxy --profile proxy up --remove-orphans

View File

@@ -1,22 +1,160 @@
#!/bin/bash
set -e
# Function to detect whether we are running in the production environment
is_production() {
local prod_ip="85.215.56.185" # IP address of the production server
local current_ip
# Detect the operating system
case "$OSTYPE" in
msys*|cygwin*|mingw*)
# Windows
current_ip=$(ipconfig | grep -i "IPv4 Address" | head -n 1 | awk '{print $NF}')
;;
darwin*)
# macOS
current_ip=$(ipconfig getifaddr en0) # Wi-Fi
if [ -z "$current_ip" ]; then
current_ip=$(ipconfig getifaddr en1) # Ethernet
fi
;;
linux*|bsd*|solaris*)
# Linux and other Unix-like systems
current_ip=$(hostname -I | awk '{print $1}')
;;
*)
echo "Unbekanntes Betriebssystem: $OSTYPE"
return 1
;;
esac
echo "Erkannte IP-Adresse: $current_ip"
if [ "$current_ip" == "$prod_ip" ]; then
echo "Produktivumgebung erkannt."
return 0 # true: we are in the production environment
else
echo "Lokale Entwicklungsumgebung erkannt."
return 1 # false: we are in the local development environment
fi
}
# Function to set the environment variables
set_environment_variables() {
if is_production; then
export DOMAIN_SUFFIX=".mindboost.team"
export TRAEFIK_DASHBOARD_DOMAIN="traefik${DOMAIN_SUFFIX}"
export PORTAINER_DOMAIN="portainer${DOMAIN_SUFFIX}"
export FRONTEND_DOMAIN="app${DOMAIN_SUFFIX}"
export BACKEND_DOMAIN="b${DOMAIN_SUFFIX}"
else
export DOMAIN_SUFFIX=".local"
export TRAEFIK_DASHBOARD_DOMAIN="traefik${DOMAIN_SUFFIX}"
export PORTAINER_DOMAIN="portainer${DOMAIN_SUFFIX}"
export FRONTEND_DOMAIN="frontend${DOMAIN_SUFFIX}"
export BACKEND_DOMAIN="backend${DOMAIN_SUFFIX}"
fi
}
echo "Prüfe, ob Traefik läuft..."
set_environment_variables
if ! docker ps --format '{{.Names}}' | grep -q 'traefik'; then
echo "Traefik läuft nicht. Starte Traefik mit CrowdSec Bouncer..."
if is_production; then
echo "Wir befinden uns in der Produktivumgebung."
echo "Starte Traefik und CrowdSec Bouncer mit docker-compose.traefik.prod.yml..."
env | grep DOMAIN # debug: show the exported DOMAIN variables
docker compose -f ../apps/proxy/docker-compose.traefik.prod.yml up -d
else
echo "Wir befinden uns in der lokalen Entwicklungsumgebung."
echo "Starte Traefik und CrowdSec Bouncer mit docker-compose.traefik.local.yml..."
env | grep DOMAIN # debug: show the exported DOMAIN variables
docker compose -f ../apps/docker-compose.traefik.local.yml up -d
fi
else
echo "Traefik läuft bereits. Aktualisiere die Konfiguration..."
if is_production; then
echo "Aktualisiere Traefik und CrowdSec Bouncer in der Produktivumgebung..."
docker compose -f ../apps/docker-compose.traefik.prod.yml up -d
else
echo "Aktualisiere Traefik und CrowdSec Bouncer in der lokalen Umgebung..."
docker compose -f ../apps/docker-compose.traefik.local.yml up -d
fi
fi
echo "Traefik und CrowdSec Bouncer Deployment abgeschlossen."
echo "Prüfe, ob Traefik läuft..."
set_environment_variables
if ! docker ps --format '{{.Names}}' | grep -q 'traefik'; then
echo "Traefik läuft nicht. Starte Traefik und Portainer..."
else
echo "Traefik läuft bereits. Aktualisiere die Konfiguration..."
fi
if is_production; then
echo "Wir befinden uns in der Produktivumgebung."
echo "Starte/Aktualisiere Deployment mit docker-compose.prod.yml..."
env | grep DOMAIN # debug: show the exported DOMAIN variables
docker compose -f ../apps/docker-compose.prod.yml up -d
else
echo "Wir befinden uns in der lokalen Entwicklungsumgebung."
echo "Starte/Aktualisiere lokale Version mit docker-compose.overwrite.yml..."
env | grep DOMAIN # debug: show the exported DOMAIN variables
docker compose -f ../apps/docker-compose.overwrite.yml up -d
fi
if ! docker ps --format '{{.Names}}' | grep -q 'traefik'; then
echo "Traefik läuft nicht."
read -p "Möchtest du die lokale Version zum Debuggen (docker-compose.overwrite.yml) starten? (y/n): " answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
echo "Starte lokale Version..."
docker compose -f ../apps/docker-compose.overwrite.yml up -d
if is_production; then
echo "Wir befinden uns in der Produktivumgebung."
set_environment_variables
echo "Starte Deployment mit docker-compose.prod.yml..."
env | grep DOMAIN # debug: show the exported DOMAIN variables
docker compose -f ../apps/docker-compose.prod.yml up -d
else
echo "Deployment abgebrochen."
exit 1
echo "Wir befinden uns in der lokalen Entwicklungsumgebung."
read -p "Möchtest du die lokale Version zum Debuggen (docker-compose.overwrite.yml) starten? (y/n): " answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
echo "Starte lokale Version..."
set_environment_variables
env | grep DOMAIN # debug: show the exported DOMAIN variables
docker compose -f ../apps/docker-compose.overwrite.yml up -d
else
echo "Deployment abgebrochen."
exit 1
fi
fi
else
echo "Traefik läuft."
echo "Starte Deployment mit docker-compose.prod.yml..."
docker compose -f ../apps/docker-compose.prod.yml up -d
echo "Traefik läuft bereits."
if is_production; then
echo "Wir befinden uns in der Produktivumgebung."
set_environment_variables
echo "Aktualisiere Deployment mit docker-compose.prod.yml..."
env | grep DOMAIN # debug: show the exported DOMAIN variables
docker compose -f ../apps/docker-compose.prod.yml up -d
else
echo "Wir befinden uns in der lokalen Entwicklungsumgebung."
set_environment_variables
echo "Aktualisiere lokale Version mit docker-compose.overwrite.yml..."
env | grep DOMAIN # debug: show the exported DOMAIN variables
docker compose -f ../apps/docker-compose.overwrite.yml up -d
fi
fi
echo "Deployment abgeschlossen."
echo "Deployment abgeschlossen."