initial commit

This commit is contained in:
allard
2025-11-23 18:58:51 +01:00
commit 376a944abc
1553 changed files with 314731 additions and 0 deletions

1
GITHUBTOKEN Normal file

@@ -0,0 +1 @@
3ecf481f2f51056b2a74d8ee5f6c150ea222b348

1
README.md Normal file

@@ -0,0 +1 @@
This is a catalog of all YAMLs and values files in GitHub that are used in the AllardDCS clusters

21
ansible/LICENSE Executable file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2018 Pavlos Ratis
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

47
ansible/README.md Executable file

@@ -0,0 +1,47 @@
# ansible-rpi-cluster
This is a basic collection of Ansible playbooks to perform common tasks in a Raspberry Pi cluster.
Feel free to contribute other common tasks that would be useful.
## Requirements
* [Ansible](http://www.ansible.com/)
## Installation
Clone the repository
git clone https://github.com/dastergon/ansible-rpi-cluster.git
Copy `hosts.sample` to `hosts`
cp hosts.sample hosts
Edit `hosts` file with your own topology. For example:
```
[all:vars]
ansible_user=pi
ansible_ssh_pass=addyourpassword
[picluster]
192.168.1.2
192.168.1.3
192.168.1.4
192.168.1.5
```
Run any of the playbooks.
## Shutdown
To shut down all your RPis, execute the following playbook:
ansible-playbook playbooks/shutdown.yml -i hosts
## Reboot
To reboot all your RPis, execute the following playbook:
ansible-playbook playbooks/reboot.yml -i hosts
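## Verify connectivity
Before running any playbook, you can verify that the inventory and SSH credentials work; a minimal sketch using Ansible's ad-hoc ping module against the `picluster` group defined above:
ansible picluster -i hosts -m ping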

2
ansible/ansible-rpi-cluster/.gitignore vendored Executable file

@@ -0,0 +1,2 @@
.DS_Store
hosts


@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2018 Pavlos Ratis
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,47 @@
# ansible-rpi-cluster
This is a basic collection of Ansible playbooks to perform common tasks in a Raspberry Pi cluster.
Feel free to contribute other common tasks that would be useful.
## Requirements
* [Ansible](http://www.ansible.com/)
## Installation
Clone the repository
git clone https://github.com/dastergon/ansible-rpi-cluster.git
Copy `hosts.sample` to `hosts`
cp hosts.sample hosts
Edit `hosts` file with your own topology. For example:
```
[all:vars]
ansible_user=pi
ansible_ssh_pass=addyourpassword
[picluster]
192.168.1.2
192.168.1.3
192.168.1.4
192.168.1.5
```
Run any of the playbooks.
## Shutdown
To shut down all your RPis, execute the following playbook:
ansible-playbook playbooks/shutdown.yml -i hosts
## Reboot
To reboot all your RPis, execute the following playbook:
ansible-playbook playbooks/reboot.yml -i hosts


@@ -0,0 +1,11 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: ansible-ansible-rpi-cluster
title: Ansible-rpi-cluster (ansible)
spec:
type: service
lifecycle: production
owner: platform-team
partOf:
- ../catalog-info.yaml


@@ -0,0 +1,2 @@
[picluster]
rpi-worker 192.168.1.123


@@ -0,0 +1,15 @@
---
- name: Playbook for rebooting the RPis
hosts: picluster
gather_facts: no
tasks:
- name: 'Reboot RPi'
shell: shutdown -r now
async: 0
poll: 0
ignore_errors: true
become: true
- name: "Wait for reboot to complete"
local_action: wait_for host={{ ansible_host }} port=22 state=started delay=10
become: false


@@ -0,0 +1,15 @@
---
- name: Playbook for shutting down the RPis
hosts: picluster
gather_facts: no
tasks:
- name: 'Shutdown RPi'
shell: shutdown -h now
async: 0
poll: 0
ignore_errors: true
become: true
- name: "Wait for shutdown to complete"
local_action: wait_for host={{ ansible_host }} port=22 state=stopped
become: false


@@ -0,0 +1,8 @@
---
- name: Playbook for managing the updates in RPi
hosts: picluster
tasks:
- name: 'Update apt package cache'
become: yes
apt:
update_cache: yes


@@ -0,0 +1,11 @@
---
- name: Playbook for upgrading the RPis
hosts: picluster
gather_facts: no
tasks:
- name: Update and upgrade apt packages
become: true
apt:
upgrade: yes
update_cache: yes
cache_valid_time: 86400

12
ansible/catalog-info.yaml Normal file

@@ -0,0 +1,12 @@
apiVersion: backstage.io/v1alpha1
kind: System
metadata:
name: ansible
title: Ansible System
spec:
owner: platform-team
partOf:
- ../catalog-info.yaml
subcomponents:
- ./playbooks/catalog-info.yaml
- ./ansible-rpi-cluster/catalog-info.yaml

8
ansible/hosts Executable file

@@ -0,0 +1,8 @@
[picluster]
pirtrwsv01 192.168.40.100
pisvrwsv01 192.168.40.101
pisvrwsv02 192.168.40.102
pisvrwsv03 192.168.40.103
pisvrwsv04 192.168.40.104
pisvrwsv05 192.168.40.105
pisvrwsv06 192.168.40.106

9
ansible/hosts.sample Executable file

@@ -0,0 +1,9 @@
<document start>
[picluster]
pirtrwsv01 192.168.40.100
pisvrwsv01 192.168.40.101
pisvrwsv02 192.168.40.102
pisvrwsv03 192.168.40.103
pisvrwsv04 192.168.40.104
pisvrwsv05 192.168.40.105
pisvrwsv06 192.168.40.106


@@ -0,0 +1,11 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: ansible-playbooks
title: Playbooks (ansible)
spec:
type: service
lifecycle: production
owner: platform-team
partOf:
- ../catalog-info.yaml

15
ansible/playbooks/reboot.yml Executable file

@@ -0,0 +1,15 @@
---
- name: Playbook for rebooting the RPis
hosts: picluster
gather_facts: no
tasks:
- name: 'Reboot RPi'
shell: shutdown -r now
async: 0
poll: 0
ignore_errors: true
become: true
- name: "Wait for reboot to complete"
local_action: wait_for host={{ ansible_host }} port=22 state=started delay=10
become: false

15
ansible/playbooks/shutdown.yml Executable file

@@ -0,0 +1,15 @@
---
- name: Playbook for shutting down the RPis
hosts: picluster
gather_facts: no
tasks:
- name: 'Shutdown RPi'
shell: shutdown -h now
async: 0
poll: 0
ignore_errors: true
become: true
- name: "Wait for shutdown to complete"
local_action: wait_for host={{ ansible_host }} port=22 state=stopped
become: false

8
ansible/playbooks/update.yml Executable file

@@ -0,0 +1,8 @@
---
- name: Playbook for managing the updates in RPi
hosts: picluster
tasks:
- name: 'Update apt package cache'
become: yes
apt:
update_cache: yes

11
ansible/playbooks/upgrade.yml Executable file

@@ -0,0 +1,11 @@
---
- name: Playbook for upgrading the RPis
hosts: picluster
gather_facts: no
tasks:
- name: Update and upgrade apt packages
become: true
apt:
upgrade: yes
update_cache: yes
cache_valid_time: 86400

11
asus/catalog-info.yaml Normal file

@@ -0,0 +1,11 @@
apiVersion: backstage.io/v1alpha1
kind: System
metadata:
name: asus
title: Asus System
spec:
owner: platform-team
partOf:
- ../catalog-info.yaml
subcomponents:
- ./mongodb-sharded/catalog-info.yaml


@@ -0,0 +1,62 @@
#BEFORE YOU START:
========
The processor must support the AVX instruction set.
You can check this by running the following command in your VirtualBox VM:
cat /proc/cpuinfo | grep flags
output:
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good
nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid
sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm
3dnowprefetch pti fsgsbase bmi1 avx2 bmi2 invpcid rdseed adx clflushopt arat
You should then see avx and sse4_2 among the flags;
otherwise, you have to run the command
bcdedit /set hypervisorlaunchtype off
on the Windows host.
#INSTALLATION
============
helm install mongodb bitnami/mongodb-sharded -n mongodb -f values.yaml
OUTPUT:
NAME: mongodb
LAST DEPLOYED: Mon Jul 7 09:10:41 2025
NAMESPACE: mongodb
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb-sharded
CHART VERSION: 9.1.1
APP VERSION: 8.0.4
The MongoDB® Sharded cluster can be accessed via the Mongos instances in port 27017 on the following DNS name from within your cluster:
mongodb-mongodb-sharded.mongodb.svc.cluster.local
To get the root password run:
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongodb mongodb-mongodb-sharded -o jsonpath="{.data.mongodb-root-password}" | base64 -d)
To connect to your database run the following command:
kubectl run --namespace mongodb mongodb-mongodb-sharded-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb-sharded:8.0.4-debian-12-r1 --command -- mongosh admin --host mongodb-mongodb-sharded --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace mongodb svc/mongodb-mongodb-sharded 27017:27017 &
mongosh --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD
WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
- configsvr.resources
- mongos.resources
- shardsvr.dataNode.resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
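The resources warning above can be addressed at install or upgrade time; a minimal sketch using --set overrides for the three sections listed in the chart notes (the request values themselves are assumptions):
helm upgrade mongodb bitnami/mongodb-sharded -n mongodb -f values.yaml \
--set configsvr.resources.requests.cpu=250m,configsvr.resources.requests.memory=512Mi \
--set mongos.resources.requests.cpu=250m,mongos.resources.requests.memory=512Mi \
--set shardsvr.dataNode.resources.requests.cpu=250m,shardsvr.dataNode.resources.requests.memory=512Mi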


@@ -0,0 +1,11 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: asus-mongodb-sharded
title: Mongodb-sharded (asus)
spec:
type: service
lifecycle: production
owner: platform-team
partOf:
- ../catalog-info.yaml


@@ -0,0 +1,16 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: mongo-express-lp.allarddcs.nl-tls
namespace: mongodb
spec:
dnsNames:
- mongo-express-lp.allarddcs.nl
issuerRef:
group: cert-manager.io
kind: ClusterIssuer
name: letsencrypt
secretName: mongo-express-lp.allarddcs.nl-tls
usages:
- digital signature
- key encipherment


@@ -0,0 +1 @@
microk8s kubectl expose deployment mongodb-mongodb-sharded --type=NodePort --name=mongodb-nodeport -n mongodb --port=27017
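# To look up which NodePort was assigned, a sketch assuming the service created by the command above:
microk8s kubectl get svc mongodb-nodeport -n mongodb -o jsonpath='{.spec.ports[0].nodePort}'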


@@ -0,0 +1,22 @@
apiVersion: v1
kind: Service
metadata:
name: mongo
namespace: mongodb
spec:
ports:
- protocol: TCP
port: 27017
targetPort: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
name: mongo
namespace: mongodb
subsets:
- addresses:
- ip: 192.168.2.109
ports:
- port: 30513


@@ -0,0 +1,35 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: mongo-express
template:
metadata:
labels:
app: mongo-express
spec:
containers:
- name: mongo-express
image: mongo-express
env:
- name: ME_CONFIG_MONGODB_URL
value: "mongodb://root:Mongodb01@192.168.2.109:30981"
ports:
- containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
name: mongo-express
spec:
type: NodePort
ports:
- port: 8081
targetPort: 8081
nodePort: 30801
selector:
app: mongo-express


@@ -0,0 +1,43 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-express
labels:
app: mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: mongo-express
template:
metadata:
labels:
app: mongo-express
spec:
containers:
- name: mongo-express
image: mongo-express:latest
env:
- name: ME_CONFIG_MONGODB_SERVER
value: "mongo.mongodb.svc.cluster.local"
- name: ME_CONFIG_MONGODB_PORT
value: "27017"
- name: ME_CONFIG_BASICAUTH_USERNAME
value: "admin"
- name: ME_CONFIG_BASICAUTH_PASSWORD
value: "pass"
ports:
- containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
name: mongo-express
spec:
type: NodePort
ports:
- port: 8081
targetPort: 8081
nodePort: 30081
selector:
app: mongo-express


@@ -0,0 +1,63 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-sharded-express
namespace: nodejs
labels:
app: mongo-sharded-express
spec:
replicas: 1
selector:
matchLabels:
app: mongo-sharded-express
template:
metadata:
labels:
app: mongo-sharded-express
spec:
containers:
- name: mongo-sharded-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_OPTIONS_EDITORTHEME
value: "ambiance"
- name: ME_CONFIG_BASICAUTH_USERNAME
value: "admin"
- name: ME_CONFIG_BASICAUTH_PASSWORD
value: "Mongodb01"
- name: ME_CONFIG_MONGODB_URL
value: "mongodb://root:Mongodb01@192.168.2.109:30981"
---
apiVersion: v1
kind: Service
metadata:
name: mongo-sharded-express
namespace: nodejs
labels:
name: mongo-sharded-express
spec:
type: ClusterIP
ports:
- port: 8081
name: http
selector:
app: mongo-sharded-express
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: mongo-sharded-express-tls
namespace: nodejs
spec:
entryPoints:
- websecure
routes:
- match: Host(`mongoshardedexpress-prod.allarddcs.nl`)
kind: Rule
services:
- name: mongo-sharded-express
port: 8081
tls:
certResolver: letsencrypt

File diff suppressed because it is too large

8
catalog-importer.yaml Normal file

@@ -0,0 +1,8 @@
apiVersion: backstage.io/v1alpha1
kind: Location
metadata:
name: kubernetes-recursive-import
description: "Recursively import all catalog-info.yaml files from this repo"
spec:
targets:
- ./**/catalog-info.yaml

35
catalog-info.yaml Normal file

@@ -0,0 +1,35 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: kubernetes-GITHUB
namespace: default
description: All deployments of Kubernetes clusters in GitHub
annotations:
backstage.io/techdocs-ref: dir:./docs
links:
- url: https://github.com/AllardKrings/kubernetes
title: AllardDCS Kubernetes Configuration
docs:
- url: ./docs/README.md
spec:
type: service
lifecycle: production
owner: group:default/allarddcs
repo: https://github.com/AllardKrings/kubernetes.git
techdocs:
url: ./docs
children:
- ./odroid/catalog-info.yaml
- ./riscv/catalog-info.yaml
- ./asus/catalog-info.yaml
- ./dev/catalog-info.yaml
- ./scripts/catalog-info.yaml
- ./docs/catalog-info.yaml
- ./kubernetes/catalog-info.yaml
- ./diversen/catalog-info.yaml
- ./lp/catalog-info.yaml
- ./node_modules/catalog-info.yaml
- ./.git/catalog-info.yaml
- ./temp/catalog-info.yaml
- ./prod/catalog-info.yaml
- ./ansible/catalog-info.yaml

1
dev.null Normal file

@@ -0,0 +1 @@
Now using node v20.19.5 (npm v10.8.2)

8
dev/README.md Normal file

@@ -0,0 +1,8 @@
All configurations of the DEV cluster:
argocd crate elasticsearch-kibana kafka pgadmin postgres16 tekton
backstage defectdojo gitea kubernetes phpmyadmin prometheus traefik
camunda deptrack grafana mariadb portainer rabbitmq trivy
catalog-info.yaml dnsutils harbor nexus postgres13 redis zabbix
cockroachdb docs hercules nginx postgres14 redmine
cosign drupal itop olproperties postgres15 sonarqube

2
dev/argocd/.argocdignore Normal file

@@ -0,0 +1,2 @@
catalog-info.yaml
catalog-info.yml

120
dev/argocd/README.md Executable file

@@ -0,0 +1,120 @@
#Installation:
kubectl create ns argocd
#create the certificate:
kubectl apply -f argocd-certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: argocd-tls-cert
namespace: argocd
spec:
secretName: argocd-tls-cert
dnsNames:
- argocd-dev.allarddcs.nl
issuerRef:
name: letsencrypt
kind: ClusterIssuer
So this creates a Certificate named "argocd-tls-cert":
NAME TYPE DATA AGE
argocd-tls-cert kubernetes.io/tls 2 76m
which is stored in a secret "argocd-tls-cert":
NAME READY SECRET AGE
argocd-tls-cert True argocd-tls-cert 76m
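A sketch of the commands behind these two listings:
kubectl get secret -n argocd argocd-tls-cert
kubectl get certificate -n argocd argocd-tls-cert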
#install via helm
helm install argocd argo-cd/argo-cd -n argocd -f values.yaml
#values.yaml:
ingress:
server:
enabled: true
ingressClassName: traefik
hosts:
- host: argocd-dev.allarddcs.nl
paths:
- "/"
tls:
- hosts:
- argocd-dev.allarddcs.nl
secretName: argocd-tls-cert
configs:
params:
# disable insecure (HTTP)
server.insecure: "false"
server:
tls:
enabled: true
# name of the TLS secret (created via cert-manager)
secretName: argocd-tls-cert
This ensures that argocd uses the previously created certificate and
that traffic is only possible over port 443.
#ingressroutes:
- through the LP-cluster runs an ingressrouteTCP with tls: passthrough: true.
- in the DEV-cluster only the ingressrouteTCP is needed:
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
name: argocd-route-tcp
namespace: argocd
spec:
entryPoints:
- websecure
routes:
- match: HostSNI(`argocd-dev.allarddcs.nl`)
priority: 10
services:
- name: argocd-server
port: 443
- match: HostSNI(`argocd-dev.allarddcs.nl`) && Headers(`Content-Type`, `application/grpc`)
priority: 11
services:
- name: argocd-server
port: 443
tls:
passthrough: true
Whether the second part is needed and works, I am not sure. In any case, traefik does NOT intercept TLS.
#Retrieve the initial password:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
#link the gitea repository:
Check whether the repository exists in git.
project: default
https://gitea-dev.allarddcs.nl/AllardDCS/dev/olproperties (WITHOUT .git!!!)
user: allard
password: Gitea01@
#add the application:
fill in the repository
add the path (olproperties)
#test the api:
there is an argocd binary on pisvrwsv00
argocd login https://argocd-dev.allarddcs.nl
argocd app list
#install the argocd-sync-and-wait task:
kubectl apply -f argocd-task-sync-and-wait.yaml
#testing can be done with:
kubectl apply -f argocd-pipeline.yaml
kubectl create -f argocd-pipeline-run.yaml
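Once the CLI login works, an application can also be synced from the command line; a minimal sketch (the application name olproperties is an assumption based on the path added above):
argocd login argocd-dev.allarddcs.nl --grpc-web
argocd app list
argocd app sync olproperties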


@@ -0,0 +1,12 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: argocd-tls-cert
namespace: argocd
spec:
secretName: argocd-tls-cert
dnsNames:
- argocd-dev.allarddcs.nl
issuerRef:
name: letsencrypt
kind: ClusterIssuer


@@ -0,0 +1,17 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
annotations:
argocd.argoproj.io/hook: Skip
backstage.io/kubernetes-label-selector: "app=argocd"
name: dev-argocd
title: Argocd (dev)
description: ArgoCD configuration
spec:
type: service
owner: allarddcs
subcomponentOf: component:default/DEV-cluster
lifecycle: production
docs:
path: ./README.md


@@ -0,0 +1,15 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: argocd-http
namespace: argocd
spec:
entryPoints:
- web
routes:
- kind: Rule
match: Host("argocd-dev.allarddcs.nl")
services:
- kind: Service
name: argocd-server
port: 80


@@ -0,0 +1,26 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: argocd-tls
namespace: argocd
spec:
entryPoints:
- websecure
routes:
- kind: Rule
match: Host(`argocd-dev.allarddcs.nl`)
priority: 10
services:
- kind: Service
name: argocd-server
port: 80
# - kind: Rule
# match: Host(`argocd-dev.allarddcs.nl`) && Headers(`Content-Type`, `application/grpc`)
# priority: 11
# services:
# - kind: Service
# name: argocd-server
# port: 80
# scheme: h2c
tls:
certResolver: letsencrypt


@@ -0,0 +1,17 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: argocd-tls
namespace: argocd
spec:
entryPoints:
- websecure
routes:
- kind: Rule
match: Host(`argocd-dev.allarddcs.nl`)
services:
- kind: Service
name: argocd-server
port: 443
tls:
certResolver: letsencrypt


@@ -0,0 +1,16 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: argocd-web
namespace: argocd
spec:
entryPoints:
- websecure
routes:
- match: Host(`argocd-dev.allarddcs.nl`)
kind: Rule
services:
- name: argocd-server
port: 443
tls:
secretName: argocd-tls-cert


@@ -0,0 +1,22 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
name: argocd-route-tcp
namespace: argocd
spec:
entryPoints:
- websecure
routes:
- match: HostSNI(`argocd-dev.allarddcs.nl`)
priority: 10
services:
- name: argocd-server
port: 443
- match: HostSNI(`argocd-dev.allarddcs.nl`) && Headers(`Content-Type`, `application/grpc`)
priority: 11
services:
- name: argocd-server
port: 443
tls:
passthrough: true


@@ -0,0 +1,46 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: argocd-http
namespace: argocd
spec:
entryPoints:
- web
routes:
- match: Host(`argocd.example.com`)
kind: Rule
middlewares:
- name: redirect-to-https
services:
- name: argocd-server
port: 80
scheme: https
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: redirect-to-https
namespace: argocd
spec:
redirectScheme:
scheme: https
permanent: true
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: argocd-https
namespace: argocd
spec:
entryPoints:
- websecure
routes:
- match: Host(`argocd.example.com`)
kind: Rule
services:
- name: argocd-server
port: 80
scheme: https
tls:
certResolver: letsencrypt

4264
dev/argocd/values.org Normal file

File diff suppressed because it is too large

25
dev/argocd/values.yaml Normal file

@@ -0,0 +1,25 @@
ingress:
server:
enabled: true
ingressClassName: traefik
hosts:
- host: argocd-dev.allarddcs.nl
paths:
- "/"
tls:
- hosts:
- argocd-dev.allarddcs.nl
secretName: argocd-tls-cert
configs:
params:
# disable insecure (HTTP)
server.insecure: "false"
server:
tls:
enabled: true
# name of the TLS secret (created via cert-manager)
secretName: argocd-tls-cert
# If you want HA, you can also configure replicas, etc.


@@ -0,0 +1,2 @@
catalog-info.yaml
catalog-info.yml

24
dev/backstage/README.md Normal file

@@ -0,0 +1,24 @@
#build container
setup.sh is a script that builds a docker image from the backstage git repo, containing:
github, gitea, techdocs
#installation
kubectl apply -f backstage.yaml
connects to the postgres13 database
#after installation:
if the database connection does not work, check which connection parameters were loaded by running, inside the container:
node -e "console.log(require('knex')({
client: 'pg',
connection: process.env.DATABASE_URL
}).raw('select 1+1'))"
If you then see "connection undefined", you know what the problem is.
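A quicker first check is to confirm that the POSTGRES_* variables actually reached the pod; a sketch, assuming the deployment and namespace from backstage.yaml:
kubectl exec -n backstage deploy/backstage -- env | grep POSTGRES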


@@ -0,0 +1,16 @@
# backstage-private-users-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: backstage-private-users
namespace: backstage
data:
allardkrings.yaml: |
apiVersion: backstage.io/v1alpha1
kind: User
metadata:
name: AllardKrings # must match GitHub username
email: admin@allarddcs.nl
spec:
memberOf:
- team:AllardDCS


@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: backstage-secrets
namespace: backstage
type: Opaque
data:
GITEA_TOKEN: N2MyODlkODliMDI0ODk5ODRmYzk4NTA0MTFiYjI2ZjZlZTRlOWQzNw==


@@ -0,0 +1,109 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: backstage
namespace: backstage
labels:
backstage.io/kubernetes-id: backstage
spec:
replicas: 1
selector:
matchLabels:
app: backstage
template:
metadata:
labels:
app: backstage
backstage.io/kubernetes-id: backstage
spec:
serviceAccountName: backstage
containers:
- name: backstage
image: allardkrings/backstage:1.44.0
imagePullPolicy: Always
env:
- name: PORT
value: "7007"
- name: POSTGRES_USER
value: backstage
- name: POSTGRES_PASSWORD
value: backstage
- name: POSTGRES_DB
value: backstage
- name: POSTGRES_SERVICE_HOST
value: postgres13.postgres.svc.cluster.local
- name: POSTGRES_SERVICE_PORT
value: "5432"
- name: APP_CONFIG_auth_environment
value: development
- name: NODE_ENV
value: development
- name: GITHUB_TOKEN
valueFrom:
secretKeyRef:
name: github-token
key: GITHUB_TOKEN
- name: GITEA_TOKEN
valueFrom:
secretKeyRef:
name: gitea-token
key: GITEA_TOKEN
volumeMounts:
# Mount the configmap as a single file
- mountPath: /app/app-config.production.yaml
subPath: app-config.yaml
name: app-configmap
# Mount the PVC as the TechDocs storage directory
- mountPath: /tmp/techdocs-storage
name: techdocs-storage
- name: private-users
mountPath: /backstage/catalog/private-users
volumes:
# ConfigMap for app config
- name: app-configmap
configMap:
name: backstage-app-config
# PVC for TechDocs storage
- name: techdocs-storage
persistentVolumeClaim:
claimName: backstage-pvc
- name: private-users
configMap:
name: backstage-private-users
---
apiVersion: v1
kind: Service
metadata:
name: backstage
namespace: backstage
labels:
backstage.io/kubernetes-id: backstage
spec:
type: ClusterIP
selector:
app: backstage
ports:
- name: http
port: 7007
targetPort: 7007
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: backstage-tls
namespace: backstage
labels:
backstage.io/kubernetes-id: backstage
spec:
entryPoints:
- websecure
routes:
- match: Host(`backstage-dev.allarddcs.nl`)
kind: Rule
services:
- name: backstage
port: 7007
tls:
secretName: backstage-dev.allarddcs.nl-tls


@@ -0,0 +1,19 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: dev-backstage
title: Backstage (dev)
description: Backstage instance running in Kubernetes
annotations:
backstage.io/kubernetes-id: backstage
links:
- url: https://github.com/AllardKrings/kubernetes/dev/backstage
title: backstage configuration
docs:
- url: ./README.md
spec:
type: service
lifecycle: production
owner: group:default/allarddcs
subcomponentOf: component:default/DEV-cluster

16
dev/backstage/certificate.yaml Executable file

@@ -0,0 +1,16 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: backstage-dev.allarddcs.nl-tls
namespace: backstage
spec:
dnsNames:
- backstage-dev.allarddcs.nl
issuerRef:
group: cert-manager.io
kind: ClusterIssuer
name: letsencrypt
secretName: backstage-dev.allarddcs.nl-tls
usages:
- digital signature
- key encipherment


@@ -0,0 +1,75 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: backstage
namespace: backstage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: backstage-k8s-reader
rules:
- apiGroups: [""]
resources:
- pods
- services
- configmaps
- namespaces
- endpoints
- limitranges
- resourcequotas
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources:
- deployments
- replicasets
- statefulsets
- daemonsets
verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
resources:
- jobs
- cronjobs
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources:
- ingresses
verbs: ["get", "list", "watch"]
- apiGroups: ["autoscaling"]
resources:
- horizontalpodautoscalers
verbs: ["get", "list", "watch"]
- apiGroups: ["metrics.k8s.io"]
resources:
- pods
verbs: ["get", "list"]
- apiGroups: ["traefik.containo.us"]
resources:
- ingressroutes
- ingressroutetcps
- ingressrouteudps
- middlewares
- middlewarestraefikio
- tlsoptions
- tlsstores
- traefikservices
- serverstransports
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: backstage-k8s-reader-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: backstage-k8s-reader
subjects:
- kind: ServiceAccount
name: backstage
namespace: backstage
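# A quick way to verify the binding, a sketch assuming the service account defined above:
kubectl auth can-i list pods --as=system:serviceaccount:backstage:backstage
kubectl auth can-i list deployments.apps --as=system:serviceaccount:backstage:backstage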


@@ -0,0 +1,143 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: backstage-app-config
namespace: backstage
data:
app-config.yaml: |
app:
title: Backstage AllardDCS
baseUrl: https://backstage-dev.allarddcs.nl
extensions:
- entity-content:kubernetes/kubernetes
backend:
baseUrl: https://backstage-dev.allarddcs.nl
env:
PATH: /usr/local/bin:$PATH
listen:
port: 7007
cors:
origin: https://backstage-dev.allarddcs.nl
methods: [GET, POST, PUT, DELETE, PATCH]
credentials: true
csp:
connect-src: ["'self'", "http:", "https:"]
database:
client: pg
connection:
host: ${POSTGRES_SERVICE_HOST}
port: ${POSTGRES_SERVICE_PORT}
user: ${POSTGRES_USER}
password: ${POSTGRES_PASSWORD}
database: ${POSTGRES_DB}
reading:
allow:
- host: raw.githubusercontent.com
paths:
- /
cache:
memory: {}
trustProxy: true
log:
level: debug
logging:
logLevel: info
loggers:
catalog:
level: debug
backend:
level: debug
techdocs:
builder: 'local'
publisher:
type: 'local'
generator:
runIn: local
organization:
name: AllardDCS
permission:
rules:
- allow:
users:
- AllardKrings
integrations:
gitea:
- host: gitea-dev.allarddcs.nl
baseUrl: https://gitea-dev.allarddcs.nl
apiBaseUrl: https://gitea-dev.allarddcs.nl/api/v1
token:
$env: GITEA_TOKEN
github:
- host: github.com
token:
$env: GITHUB_TOKEN
catalog:
providers:
github:
myGithub:
organization: 'AllardKrings'
catalogPath: '/**/catalog-info.yaml'
filters:
branch: 'master'
repository: 'kubernetes'
schedule:
frequency: { minutes: 30 }
timeout: { minutes: 3 }
gitea:
myGitea:
organization: 'allarddcs'
host: gitea-dev.allarddcs.nl
branch: 'master'
catalogPath: '/**/catalog-info.yaml'
schedule:
frequency: { minutes: 30 }
timeout: { minutes: 3 }
locations:
- type: url
target: https://gitea-dev.allarddcs.nl/AllardDCS/kubernetes/raw/branch/master/group.yaml
rules:
- allow: [Group]
- type: file
target: /backstage/catalog/private-users/allardkrings.yaml
rules:
- allow: [User]
processors:
gitea:
- host: gitea-dev.allarddcs.nl
apiBaseUrl: https://gitea-dev.allarddcs.nl/api/v1
kubernetes:
serviceLocatorMethod:
type: multiTenant
clusterLocatorMethods:
- type: config
clusters:
- name: local-cluster
url: https://kubernetes.default.svc
authProvider: serviceAccount
auth:
# see https://backstage.io/docs/auth/ to learn about auth providers
environment: development
providers:
# See https://backstage.io/docs/auth/guest/provider
guest: {}
github:
development:
clientId: Ov23lilVTWftNp9vMFwB
clientSecret: a687566a8d4871d30fe0126f150515531969d5fc
usePopup: false
signIn:
resolvers:
# Matches the GitHub username with the Backstage user entity name.
# See https://backstage.io/docs/auth/github/provider#resolvers for more resolvers.
- resolver: usernameMatchingUserEntityName


@@ -0,0 +1,2 @@
microk8s kubectl create secret generic gitea-token -n backstage \
--from-literal=GITEA_TOKEN=7c289d89b02489984fc9850411bb26f6ee4e9d37
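# The deployment in backstage.yaml also reads a github-token secret; a sketch of creating it (the token value is a placeholder):
microk8s kubectl create secret generic github-token -n backstage \
--from-literal=GITHUB_TOKEN=<your-github-token>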


@@ -0,0 +1 @@
7c289d89b02489984fc9850411bb26f6ee4e9d37


@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
metadata:
name: postgres-secrets
namespace: backstage
type: Opaque
data:
POSTGRES_USER: YmFja3N0YWdlCg==
POSTGRES_PASSWORD: YmFja3N0YWdlCg==
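# Note: YmFja3N0YWdlCg== decodes to "backstage" plus a trailing newline; a sketch of generating the value without one, if that is not intended:
# echo -n 'backstage' | base64   # prints YmFja3N0YWdl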

34
dev/backstage/pvc.yaml Normal file

@@ -0,0 +1,34 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: backstage-pv
spec:
storageClassName: ""
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
mountOptions:
- hard
- nfsvers=4.1
nfs:
server: 192.168.2.110
path: /mnt/nfs_share/backstage/dev
readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: backstage-pvc
namespace: backstage
spec:
storageClassName: ""
volumeName: backstage-pv
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 1Gi

3
dev/backstage/restart.sh Executable file

@@ -0,0 +1,3 @@
microk8s kubectl apply -f configmap.yaml
microk8s kubectl rollout restart deploy/backstage -n backstage
microk8s kubectl get pod -n backstage


@@ -0,0 +1,222 @@
#!/bin/bash
set -e
# ------------------------
# Load NVM
# ------------------------
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
# Use Node 20
nvm use 20
# ------------------------
# Configuration
# ------------------------
APP_NAME="backstage"
APP_DIR="$PWD/$APP_NAME"
echo "=== 1. Creating Backstage app ==="
# Use --ignore-existing to avoid cached/missing binaries
npx --ignore-existing @backstage/create-app@latest "$APP_DIR"
cd "$APP_DIR"
echo "=== 2. Bumping Backstage version to 1.42.1 ==="
yarn backstage-cli versions:bump --release 1.42.1
echo "=== 3. Installing plugin dependencies (Yarn 4 compatible) ==="
# Backend plugins
yarn --cwd packages/backend add \
@backstage/plugin-techdocs-backend \
@backstage/plugin-catalog-backend-module-github \
@backstage/plugin-catalog-backend-module-gitea \
@backstage/plugin-devtools-backend
# Frontend plugins
yarn --cwd packages/app add \
@backstage/plugin-techdocs \
@backstage/plugin-catalog \
@backstage/plugin-catalog-graph \
@backstage/plugin-techdocs-module-addons-contrib
echo "=== 4. Patching backend/src/index.ts ==="
BACKEND_FILE=packages/backend/src/index.ts
cat > "$BACKEND_FILE" <<'EOF'
import { createBackend } from '@backstage/backend-defaults';
import { createBackendFeatureLoader } from '@backstage/backend-plugin-api';
const backend = createBackend();
// Catalog
backend.add(import('@backstage/plugin-catalog-backend'));
backend.add(import('@backstage/plugin-catalog-backend-module-scaffolder-entity-model'));
backend.add(import('@backstage/plugin-catalog-backend-module-unprocessed'));
backend.add(import('@backstage/plugin-catalog-backend-module-github'));
backend.add(import('@backstage/plugin-catalog-backend-module-gitea'));
backend.add(import('@backstage/plugin-devtools-backend'));
// Scaffolder
backend.add(import('@backstage/plugin-scaffolder-backend'));
backend.add(import('@backstage/plugin-scaffolder-backend-module-github'));
backend.add(import('@backstage/plugin-scaffolder-backend-module-notifications'));
// Auth
backend.add(import('@backstage/plugin-auth-backend'));
backend.add(import('@backstage/plugin-auth-backend-module-guest-provider'));
// TechDocs
backend.add(import('@backstage/plugin-techdocs-backend'));
// Kubernetes
backend.add(import('@backstage/plugin-kubernetes-backend'));
// Search
const searchLoader = createBackendFeatureLoader({
*loader() {
yield import('@backstage/plugin-search-backend');
yield import('@backstage/plugin-search-backend-module-catalog');
yield import('@backstage/plugin-search-backend-module-techdocs');
},
});
backend.add(searchLoader);
// Misc
backend.add(import('@backstage/plugin-devtools-backend'));
backend.add(import('@backstage/plugin-app-backend'));
backend.add(import('@backstage/plugin-proxy-backend'));
backend.add(import('@backstage/plugin-permission-backend'));
backend.add(import('@backstage/plugin-permission-backend-module-allow-all-policy'));
backend.add(import('@backstage/plugin-notifications-backend'));
backend.add(import('@backstage/plugin-events-backend'));
backend.start();
EOF
echo "✓ Backend patched."
echo "=== 5. Patching packages/app/src/App.tsx ==="
APP_FILE=packages/app/src/App.tsx
cat > "$APP_FILE" <<'EOF'
import React from 'react';
import { createApp } from '@backstage/app-defaults';
import { FlatRoutes } from '@backstage/core-app-api';
import { Route, Navigate } from 'react-router-dom';
import { CatalogIndexPage, CatalogEntityPage } from '@backstage/plugin-catalog';
import { CatalogGraphPage } from '@backstage/plugin-catalog-graph';
import { ApiExplorerPage } from '@backstage/plugin-api-docs';
import { TechDocsIndexPage, TechDocsReaderPage } from '@backstage/plugin-techdocs';
import { ScaffolderPage } from '@backstage/plugin-scaffolder';
import { SearchPage } from '@backstage/plugin-search';
import { UserSettingsPage } from '@backstage/plugin-user-settings';
const app = createApp();
const routes = (
<FlatRoutes>
<Route path="/" element={<Navigate to="/catalog" />} />
<Route path="/catalog" element={<CatalogIndexPage />} />
<Route path="/catalog/:namespace/:kind/:name" element={<CatalogEntityPage />} />
<Route path="/catalog-graph" element={<CatalogGraphPage />} />
<Route path="/api-docs" element={<ApiExplorerPage />} />
<Route path="/docs" element={<TechDocsIndexPage />} />
<Route path="/docs/:namespace/:kind/:name/*" element={<TechDocsReaderPage />} />
<Route path="/search" element={<SearchPage />} />
<Route path="/create" element={<ScaffolderPage />} />
<Route path="/settings" element={<UserSettingsPage />} />
</FlatRoutes>
);
export default app.createRoot(routes);
EOF
echo "✓ App.tsx patched."
echo "=== 6. Installing all dependencies ==="
# Yarn 4 uses --immutable instead of --frozen-lockfile
yarn install --immutable
echo "=== 7. Building backend artifacts ==="
yarn workspace backend build
# Verify the build output
if [ ! -f packages/backend/dist/bundle.tar.gz ] || [ ! -f packages/backend/dist/skeleton.tar.gz ]; then
echo "❌ Backend build failed: required files not found!"
exit 1
fi
echo "✓ Backend build complete."
# -----------------------------
# 8a. Patch backend Dockerfile to include TechDocs/MkDocs + Yarn 4 support
# -----------------------------
DOCKERFILE=packages/backend/Dockerfile
cat > "$DOCKERFILE" <<'EOF'
# This dockerfile builds an image for the backend package.
# It should be executed with the root of the repo as docker context.
#
# Before building this image, be sure to have run the following commands in the repo root:
#
# yarn install
# yarn tsc
# yarn build:backend
#
# Once the commands have been run, you can build the image using `yarn build-image`
FROM node:20-bookworm-slim
# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
# Additionally, we install dependencies for `techdocs.generator.runIn: local`.
# https://backstage.io/docs/features/techdocs/getting-started#disabling-docker-in-docker-situation-optional
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev python3 python3-pip python3-venv build-essential && \
yarn config set python /usr/bin/python3
# Set up a virtual environment for mkdocs-techdocs-core.
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN pip3 install mkdocs-techdocs-core==1.1.7
# From here on we use the least-privileged `node` user to run the backend.
WORKDIR /app
RUN chown node:node /app
USER node
# This switches many Node.js dependencies to production mode.
ENV NODE_ENV=production
# Copy over Yarn 3 configuration, release, and plugins
COPY --chown=node:node .yarn ./.yarn
COPY --chown=node:node .yarnrc.yml ./
# Copy repo skeleton first, to avoid unnecessary docker cache invalidation.
# The skeleton contains the package.json of each package in the monorepo,
# and along with yarn.lock and the root package.json, that's enough to run yarn install.
COPY --chown=node:node yarn.lock package.json packages/backend/dist/skeleton.tar.gz ./
RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz
RUN --mount=type=cache,target=/home/node/.yarn/berry/cache,sharing=locked,uid=1000,gid=1000 \
yarn workspaces focus --all --production
# Then copy the rest of the backend bundle, along with any other files we might want.
COPY --chown=node:node packages/backend/dist/bundle.tar.gz app-config*.yaml ./
RUN tar xzf bundle.tar.gz && rm bundle.tar.gz
CMD ["node", "packages/backend", "--config", "app-config.yaml"]
EOF
echo "✓ Backend Dockerfile patched with TechDocs + Yarn 4 support."
echo "=== 8. Building backend Docker image ==="
yarn workspace backend build-image
echo "✅ Backstage 1.42.1 setup complete with TechDocs!"
echo "Run with: docker run -p 7007:7007 <image_name>"

178
dev/backstage/setup.sh Executable file

@@ -0,0 +1,178 @@
#!/usr/bin/env bash
set -euo pipefail
# ------------------------
# Configuration
# ------------------------
APP_NAME="backstage"
APP_DIR="$PWD/$APP_NAME"
BACKSTAGE_RELEASE="1.42.1"
NODE_VERSION_MIN=18
echo
echo "=== Backstage automated setup script ==="
echo "App dir: $APP_DIR"
echo "Target Backstage release: $BACKSTAGE_RELEASE"
echo
# Quick environment checks
command -v node >/dev/null 2>&1 || echo "Warning: node not found (need >= ${NODE_VERSION_MIN})"
command -v yarn >/dev/null 2>&1 || echo "Warning: yarn not found"
# ------------------------
# 1) Create Backstage app
# ------------------------
if [ -d "$APP_DIR" ]; then
echo "Directory $APP_DIR already exists — aborting to avoid overwriting."
exit 1
fi
echo "=== 1) Creating Backstage app ==="
npx --ignore-existing @backstage/create-app@latest "$APP_DIR"
cd "$APP_DIR"
# ------------------------
# 2) Bump Backstage versions
# ------------------------
echo "=== 2) Bumping Backstage packages to release $BACKSTAGE_RELEASE ==="
yarn backstage-cli versions:bump --release "$BACKSTAGE_RELEASE"
# ------------------------
# 3) Install backend plugins
# ------------------------
echo "=== 3) Installing backend plugins ==="
yarn --cwd packages/backend add \
@backstage/plugin-catalog-backend \
@backstage/plugin-catalog-backend-module-scaffolder-entity-model \
@backstage/plugin-catalog-backend-module-unprocessed \
@backstage/plugin-catalog-backend-module-github \
@backstage/plugin-catalog-backend-module-gitea \
@backstage/plugin-scaffolder-backend \
@backstage/plugin-scaffolder-backend-module-github \
@backstage/plugin-scaffolder-backend-module-notifications \
@backstage/plugin-auth-backend \
@backstage/plugin-techdocs-backend \
@backstage/plugin-kubernetes-backend \
@backstage/plugin-devtools-backend \
@backstage/plugin-app-backend \
@backstage/plugin-proxy-backend \
@backstage/plugin-permission-backend \
@backstage/plugin-permission-backend-module-allow-all-policy \
@backstage/plugin-notifications-backend \
@backstage/plugin-events-backend \
@backstage/plugin-search-backend \
@backstage/plugin-search-backend-module-catalog \
@backstage/plugin-search-backend-module-techdocs
# ------------------------
# 4) Install frontend plugins
# ------------------------
echo "=== 4) Installing frontend plugins ==="
yarn --cwd packages/app add \
@backstage/plugin-catalog \
@backstage/plugin-catalog-graph \
@backstage/plugin-catalog-import \
@backstage/plugin-techdocs \
@backstage/plugin-techdocs-module-addons-contrib \
@backstage/plugin-scaffolder \
@backstage/plugin-user-settings \
@backstage/plugin-search \
@backstage/plugin-api-docs \
@backstage/plugin-org
# ------------------------
# 5) Patch backend index.ts with static imports
# ------------------------
echo "=== 5) Patching backend index.ts ==="
BACKEND_FILE="packages/backend/src/index.ts"
mkdir -p "$(dirname "$BACKEND_FILE")"
cat > "$BACKEND_FILE" <<'EOF'
import { createBackend } from '@backstage/backend-defaults';
import { createBackendFeatureLoader } from '@backstage/backend-plugin-api';
import appBackend from '@backstage/plugin-app-backend';
import catalogBackend from '@backstage/plugin-catalog-backend';
import catalogScaffolderEntityModel from '@backstage/plugin-catalog-backend-module-scaffolder-entity-model';
import catalogUnprocessed from '@backstage/plugin-catalog-backend-module-unprocessed';
import catalogGithub from '@backstage/plugin-catalog-backend-module-github';
import catalogGitea from '@backstage/plugin-catalog-backend-module-gitea';
import scaffolderBackend from '@backstage/plugin-scaffolder-backend';
import scaffolderGithub from '@backstage/plugin-scaffolder-backend-module-github';
import scaffolderNotifications from '@backstage/plugin-scaffolder-backend-module-notifications';
import authBackend from '@backstage/plugin-auth-backend';
import techdocsBackend from '@backstage/plugin-techdocs-backend';
import kubernetesBackend from '@backstage/plugin-kubernetes-backend';
import devtoolsBackend from '@backstage/plugin-devtools-backend';
import proxyBackend from '@backstage/plugin-proxy-backend';
import permissionBackend from '@backstage/plugin-permission-backend';
import allowAllPolicy from '@backstage/plugin-permission-backend-module-allow-all-policy';
import notificationsBackend from '@backstage/plugin-notifications-backend';
import eventsBackend from '@backstage/plugin-events-backend';
const backend = createBackend();
backend.add(appBackend);
backend.add(catalogBackend);
backend.add(catalogScaffolderEntityModel);
backend.add(catalogUnprocessed);
backend.add(catalogGithub);
backend.add(catalogGitea);
backend.add(scaffolderBackend);
backend.add(scaffolderGithub);
backend.add(scaffolderNotifications);
backend.add(authBackend);
backend.add(techdocsBackend);
backend.add(kubernetesBackend);
backend.add(devtoolsBackend);
backend.add(proxyBackend);
backend.add(permissionBackend);
backend.add(allowAllPolicy);
backend.add(notificationsBackend);
backend.add(eventsBackend);
const searchLoader = createBackendFeatureLoader({
*loader() {
yield import('@backstage/plugin-search-backend');
yield import('@backstage/plugin-search-backend-module-catalog');
yield import('@backstage/plugin-search-backend-module-techdocs');
},
});
backend.add(searchLoader);
backend.start();
EOF
echo "✓ Backend index.ts patched."
# ------------------------
# 6) Do NOT overwrite App.tsx
# ------------------------
echo "=== 6) Preserving existing App.tsx ==="
# ------------------------
# 7) Install workspace dependencies
# ------------------------
echo "=== 7) Installing workspace dependencies ==="
yarn install
# ------------------------
# 8) Build backend bundle
# ------------------------
echo "=== 8) Building backend bundle ==="
yarn workspace backend build
# ------------------------
# 9) Build Docker image
# ------------------------
echo "=== 9) Building backend Docker image ==="
yarn workspace backend build-image
echo "=== DONE ==="
echo "Backstage app created at: $APP_DIR"
echo "Docker image built successfully. Run with: docker run -p 7007:7007 <image_name>"

205
dev/backstage/setup3.sh Executable file

@@ -0,0 +1,205 @@
#!/usr/bin/env bash
set -euo pipefail
# ------------------------
# Configuration
# ------------------------
APP_NAME="backstage"
APP_DIR="$PWD/$APP_NAME"
BACKSTAGE_RELEASE="1.42.1"
NODE_VERSION_MIN=18
echo
echo "=== Backstage automated setup script ==="
echo "App dir: $APP_DIR"
echo "Target Backstage release: $BACKSTAGE_RELEASE"
echo
# Quick environment checks
command -v node >/dev/null 2>&1 || echo "Warning: node not found (need >= ${NODE_VERSION_MIN})"
command -v yarn >/dev/null 2>&1 || echo "Warning: yarn not found"
# ------------------------
# 1) Create Backstage app
# ------------------------
if [ -d "$APP_DIR" ]; then
echo "Directory $APP_DIR already exists — aborting to avoid overwriting."
exit 1
fi
echo "=== 1) Creating Backstage app ==="
npx --ignore-existing @backstage/create-app@latest "$APP_DIR"
cd "$APP_DIR"
# ------------------------
# 2) Bump Backstage versions
# ------------------------
echo "=== 2) Bumping Backstage packages to release $BACKSTAGE_RELEASE ==="
yarn backstage-cli versions:bump --release "$BACKSTAGE_RELEASE"
# ------------------------
# 3) Install backend plugins
# ------------------------
echo "=== 3) Installing backend plugins ==="
yarn --cwd packages/backend add \
@backstage/plugin-catalog-backend \
@backstage/plugin-catalog-backend-module-scaffolder-entity-model \
@backstage/plugin-catalog-backend-module-unprocessed \
@backstage/plugin-catalog-backend-module-github \
@backstage/plugin-catalog-backend-module-gitea \
@backstage/plugin-scaffolder-backend \
@backstage/plugin-scaffolder-backend-module-github \
@backstage/plugin-scaffolder-backend-module-notifications \
@backstage/plugin-auth-backend \
@backstage/plugin-auth-backend-module-guest-provider \
@backstage/plugin-techdocs-backend \
@backstage/plugin-kubernetes-backend \
@backstage/plugin-devtools-backend \
@backstage/plugin-app-backend \
@backstage/plugin-proxy-backend \
@backstage/plugin-permission-backend \
@backstage/plugin-permission-backend-module-allow-all-policy \
@backstage/plugin-notifications-backend \
@backstage/plugin-events-backend \
@backstage/plugin-search-backend \
@backstage/plugin-search-backend-module-catalog \
@backstage/plugin-search-backend-module-techdocs
# ------------------------
# 4) Install frontend plugins
# ------------------------
echo "=== 4) Installing frontend plugins ==="
yarn --cwd packages/app add \
@backstage/plugin-catalog \
@backstage/plugin-catalog-graph \
@backstage/plugin-catalog-import \
@backstage/plugin-techdocs \
@backstage/plugin-techdocs-module-addons-contrib \
@backstage/plugin-scaffolder \
@backstage/plugin-user-settings \
@backstage/plugin-search \
@backstage/plugin-api-docs \
@backstage/plugin-org
# ------------------------
# 5) Patch backend index.ts with static imports
# ------------------------
echo "=== 5) Patching backend index.ts ==="
BACKEND_FILE="packages/backend/src/index.ts"
mkdir -p "$(dirname "$BACKEND_FILE")"
cat > "$BACKEND_FILE" <<'EOF'
import { createBackend } from '@backstage/backend-defaults';
import { createBackendFeatureLoader } from '@backstage/backend-plugin-api';
import appBackend from '@backstage/plugin-app-backend';
import catalogBackend from '@backstage/plugin-catalog-backend';
import catalogScaffolderEntityModel from '@backstage/plugin-catalog-backend-module-scaffolder-entity-model';
import catalogUnprocessed from '@backstage/plugin-catalog-backend-module-unprocessed';
import catalogGithub from '@backstage/plugin-catalog-backend-module-github';
import catalogGitea from '@backstage/plugin-catalog-backend-module-gitea';
import scaffolderBackend from '@backstage/plugin-scaffolder-backend';
import scaffolderGithub from '@backstage/plugin-scaffolder-backend-module-github';
import scaffolderNotifications from '@backstage/plugin-scaffolder-backend-module-notifications';
import authBackend from '@backstage/plugin-auth-backend';
import guestProvider from '@backstage/plugin-auth-backend-module-guest-provider';
import techdocsBackend from '@backstage/plugin-techdocs-backend';
import kubernetesBackend from '@backstage/plugin-kubernetes-backend';
import devtoolsBackend from '@backstage/plugin-devtools-backend';
import proxyBackend from '@backstage/plugin-proxy-backend';
import permissionBackend from '@backstage/plugin-permission-backend';
import allowAllPolicy from '@backstage/plugin-permission-backend-module-allow-all-policy';
import notificationsBackend from '@backstage/plugin-notifications-backend';
import eventsBackend from '@backstage/plugin-events-backend';
const backend = createBackend();
backend.add(appBackend);
backend.add(catalogBackend);
backend.add(catalogScaffolderEntityModel);
backend.add(catalogUnprocessed);
backend.add(catalogGithub);
backend.add(catalogGitea);
backend.add(scaffolderBackend);
backend.add(scaffolderGithub);
backend.add(scaffolderNotifications);
backend.add(authBackend);
backend.add(guestProvider);
backend.add(techdocsBackend);
backend.add(kubernetesBackend);
backend.add(devtoolsBackend);
backend.add(proxyBackend);
backend.add(permissionBackend);
backend.add(allowAllPolicy);
backend.add(notificationsBackend);
backend.add(eventsBackend);
const searchLoader = createBackendFeatureLoader({
*loader() {
yield import('@backstage/plugin-search-backend');
yield import('@backstage/plugin-search-backend-module-catalog');
yield import('@backstage/plugin-search-backend-module-techdocs');
},
});
backend.add(searchLoader);
backend.start();
EOF
echo "✓ Backend index.ts patched."
# ------------------------
# 6) Do NOT overwrite App.tsx
# ------------------------
echo "=== 6) Preserving existing App.tsx ==="
# ------------------------
# 7) Install workspace dependencies
# ------------------------
echo "=== 7) Installing workspace dependencies ==="
yarn install
# ------------------------
# 8) Build backend bundle
# ------------------------
echo "=== 8) Building backend bundle ==="
yarn workspace backend build
# ------------------------
# 9) Patch backend Dockerfile for TechDocs
# ------------------------
DOCKERFILE="packages/backend/Dockerfile"
echo "=== Patching backend Dockerfile for TechDocs mkdocs ==="
# Insert mkdocs virtualenv only if not already patched
if ! grep -q "VIRTUAL_ENV=/opt/venv" "$DOCKERFILE"; then
cat >> "$DOCKERFILE" <<'EOF'
# --- TechDocs MkDocs virtualenv ---
USER root
RUN apt-get update && apt-get install -y python3 python3-pip python3-venv git build-essential && rm -rf /var/lib/apt/lists/*
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN pip3 install mkdocs-techdocs-core mkdocs-awesome-pages-plugin
USER node
EOF
echo "✓ Dockerfile patched with mkdocs virtualenv"
else
echo "✓ Dockerfile already patched"
fi
# ------------------------
# 10) Build Docker image
# ------------------------
echo "=== 10) Building backend Docker image ==="
yarn workspace backend build-image
echo "=== DONE ==="
echo "Backstage app created at: $APP_DIR"
echo "Docker image built successfully. Run with: docker run -p 7007:7007 <image_name>"

2
dev/camunda/README.md Executable file

@@ -0,0 +1,2 @@
userid: demo
password: demo

52
dev/camunda/camunda.yaml Executable file
View File

@@ -0,0 +1,52 @@
apiVersion: v1
kind: Service
metadata:
name: camunda
namespace: camunda
spec:
selector:
app: camunda
ports:
- name: http
port: 8080
targetPort: 8080
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: camunda
namespace: camunda
spec:
replicas: 1
selector:
matchLabels:
app: camunda
template:
metadata:
labels:
app: camunda
spec:
containers:
- name: camunda
image: allardkrings/camunda7-arm64v8
ports:
- containerPort: 8080
imagePullPolicy: IfNotPresent
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: camunda-tls
namespace: camunda
spec:
entryPoints:
- websecure
routes:
- match: Host(`camunda-prod.allarddcs.nl`)
kind: Rule
services:
- name: camunda
port: 8080
tls:
certResolver: letsencrypt


@@ -0,0 +1,17 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: dev-camunda
title: Camunda (dev)
annotations:
backstage.io/kubernetes-id: camunda
links:
- url: https://github.com/AllardKrings/kubernetes/dev/camunda
title: camunda-configuration
docs:
- url: ./README.md
spec:
type: service
lifecycle: production
owner: allarddcs
subcomponentOf: component:default/DEV-cluster

60
dev/catalog-info.yaml Normal file

@@ -0,0 +1,60 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: DEV-cluster
namespace: default
description: deployments of the DEV cluster
annotations:
backstage.io/techdocs-ref: dir:.
links:
- url: https://github.com/AllardKrings/kubernetes/dev/
title: AllardDCS DEV-cluster
docs:
- url: ./README.md
spec:
type: service
lifecycle: production
owner: group:default/allarddcs
children:
- component:default/dev-camunda
- component:default/dev-redis
- component:default/dev-postgres16
- component:default/dev-argocd
- component:default/dev-backstage
- component:default/dev-camunda
- component:default/dev-cockroachdb
- component:default/dev-cosign
- component:default/dev-crate
- component:default/dev-defectdojo
- component:default/dev-deptrack
- component:default/dev-dnsutils
- component:default/dev-docs
- component:default/dev-drupal
- component:default/dev-elasticsearch-kibana
- component:default/dev-gitea
- component:default/dev-grafana
- component:default/dev-harbor
- component:default/dev-hercules
- component:default/dev-itop
- component:default/dev-kafka
- component:default/dev-kubernetes
- component:default/dev-mariadb
- component:default/dev-nexus
- component:default/dev-nginx
- component:default/dev-olproperties
- component:default/dev-pgadmin
- component:default/dev-phpmyadmin
- component:default/dev-portainer
- component:default/dev-postgres13
- component:default/dev-postgres14
- component:default/dev-postgres15
- component:default/dev-postgres16
- component:default/dev-prometheus
- component:default/dev-rabbitmq
- component:default/dev-redis
- component:default/dev-redmine
- component:default/dev-sonarqube
- component:default/dev-tekton
- component:default/dev-traefik
- component:default/dev-trivy
- component:default/dev-zabbix

26
dev/cockroachdb/README.md Normal file

@@ -0,0 +1,26 @@
#Installation:
#apply the CRD:
kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v2.14.0/install/crds.yaml
#install cockroachdb:
kubectl apply -f cockroachdb.yaml
#Initialize the cluster:
kubectl exec -it cockroachdb-0 \
-- /cockroach/cockroach init \
--certs-dir=/cockroach/cockroach-certs
#Log in with the client:
kubectl exec -it cockroachdb-client-secure \
-- ./cockroach sql \
--certs-dir=/cockroach-certs \
--host=cockroachdb-public
#Create a user:
CREATE USER roach WITH PASSWORD 'Cockroach01@';
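Role membership can then be granted and checked from the same client pod; a minimal sketch (granting the admin role is an assumption about the intended privileges):
kubectl exec -it cockroachdb-client-secure -- ./cockroach sql \
--certs-dir=/cockroach-certs --host=cockroachdb-public \
-e "GRANT admin TO roach; SHOW USERS;"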


@@ -0,0 +1,10 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: dev-cockroachdb
title: Cockroachdb (dev)
spec:
type: service
lifecycle: production
owner: group:default/allarddcs
subcomponentOf: component:default/DEV-cluster


@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDJTCCAg2gAwIBAgIQa/0mCEqslZ2d107ceEr9ATANBgkqhkiG9w0BAQsFADAr
MRIwEAYDVQQKEwlDb2Nrcm9hY2gxFTATBgNVBAMTDENvY2tyb2FjaCBDQTAeFw0y
NTAxMjUyMDIzNDRaFw0zNTAyMDMyMDIzNDRaMCsxEjAQBgNVBAoTCUNvY2tyb2Fj
aDEVMBMGA1UEAxMMQ29ja3JvYWNoIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAvBJOTewyeYeWUncc7wx27bRCaDH7YawGyaltYypUzo93li+8K5Uw
VSYfy3mxNp47IQXebDPCQITct5pGq/EBTrWGJ/MLf8ZcCfPvvzylsqsesFFfS5y0
sYof+JzyowDOJflWsQnJLIK5kD32fvupvc0dKY8q/4WN/Ra1kiUm6ZcFYWVKJx2s
2ZVWcDP5xh+obCgP3F4cTsLjo1mkoRPMSLw5w9M5x3AiDgi6zwkcw9aUVq0lBciA
lI4cAHC4Awc1AP3OazYV/E+cC6dtzS+55KRGQIYOp/pkgBKsTAd2ahuZTh8ZWXyS
p30X0luRUO9wBksGEt5ixx5QdtOd0jQWLQIDAQABo0UwQzAOBgNVHQ8BAf8EBAMC
AuQwEgYDVR0TAQH/BAgwBgEB/wIBATAdBgNVHQ4EFgQU5Olr9c4vu7OLVJrlGOtF
rdh5+qQwDQYJKoZIhvcNAQELBQADggEBALTZARd4BA0ke5O4a9G+1Om1P4L16fk9
R2uICKW1MEGg/1zDXZS/6dX+2xJrLLp3xhFcpFge78zi0MVyBfnrl0j+Uk+eSPar
iubS9S/qN7LkMKcZM8l2hZnPQ0bu6WbaKcH9Bu2KNcWdowsCLb7vgIEXkNPlxoKM
Q+lOZHorpLZgQph1Se7nnjhuXuqxzhxv5NlPVVy/ZiuoJ1FUn5nbS3vIvpGGiGsO
2bGltS2ADsfBNmCsRfgj1HutHERpUG+cvMsa9Wf9o3wuohUOzguPxxaL/Hpbxwp+
hnL13ksKb/bs45VHtYRQuZaUPoqTWvLRMIdMMxaLNMzE6Xyzc8h/dbA=
-----END CERTIFICATE-----


@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDIDCCAgigAwIBAgIQJwncfRDbHgMyuJKxK0dKCDANBgkqhkiG9w0BAQsFADAr
MRIwEAYDVQQKEwlDb2Nrcm9hY2gxFTATBgNVBAMTDENvY2tyb2FjaCBDQTAeFw0y
NTAxMjUyMDIzNTdaFw0zMDAxMzAyMDIzNTdaMCMxEjAQBgNVBAoTCUNvY2tyb2Fj
aDENMAsGA1UEAxMEcm9vdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
ALzsZkbDiGNFg+jC16+eLzL5GvygvInkFljBgxJcrajRueq3KKfWlg1WTw6SqoiU
+c1uBiK8wiz17zkyo6K1lOabIlRutyAPZNnx7F+iBvhbMw8uzrlvWZKNCTWAJi4M
tLNDesSqmcCdEl+7ycJkGEmXyyDjGz+UtI6Bq5ax/MN9lc8CoKKAc6KzqiiYf0MR
6A2f5wwm8th8kT89HIt541LyElUr0JjttYOhrR0O82gF11Uf6OTYCxiySaHXTXpW
yYXXs6YsFaqm+Y3UZfnIk3jkwMPTYuQ3HoVe66YPB87JbPfMmiO4+NBGgqpSq2d9
n+l87zGJumwUaFQcq2s/1yUCAwEAAaNIMEYwDgYDVR0PAQH/BAQDAgWgMBMGA1Ud
JQQMMAoGCCsGAQUFBwMCMB8GA1UdIwQYMBaAFOTpa/XOL7uzi1Sa5RjrRa3Yefqk
MA0GCSqGSIb3DQEBCwUAA4IBAQAyygcCWS9hC2/HI59i5IwirXxO6NXUJLQIrooz
z187fhAdfVGioAT6K1cU+NrrJgoFc9Znle4USjAgiCttfOu8ZXXySpm8kpwzlPCa
m7tg76cpOHB9Gw1vt4DQdgjTjBDiIMjQIa8BRdIgvjC0VodFMe950cBuYpTrX27W
KdFpsqWfD423uWPyVMxO/8k1E0epuHnLxqNEX55+yPM24PxiHVxsm6YSeViIAxj0
NXNXYSAoHQKob+8NysWT4QhrezdF8Cj6zbvlIrpJdmRiwcvbvBp4bnj6wg5OYAPM
pNqjII1A52ryOn5jVEfZvBb6s18ZIm9d/xGPugVsbJhBJy6S
-----END CERTIFICATE-----


@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAvOxmRsOIY0WD6MLXr54vMvka/KC8ieQWWMGDElytqNG56rco
p9aWDVZPDpKqiJT5zW4GIrzCLPXvOTKjorWU5psiVG63IA9k2fHsX6IG+FszDy7O
uW9Zko0JNYAmLgy0s0N6xKqZwJ0SX7vJwmQYSZfLIOMbP5S0joGrlrH8w32VzwKg
ooBzorOqKJh/QxHoDZ/nDCby2HyRPz0ci3njUvISVSvQmO21g6GtHQ7zaAXXVR/o
5NgLGLJJoddNelbJhdezpiwVqqb5jdRl+ciTeOTAw9Ni5DcehV7rpg8Hzsls98ya
I7j40EaCqlKrZ32f6XzvMYm6bBRoVByraz/XJQIDAQABAoIBAAVHOYhKmDnlzEyp
fOssKTdsXEOonfvgQnuSVH4j1ro7uc0D9v/Rb/nJaoYGtPsB5oTFySgZS/eDm35m
msnF9vYGaYwgV79ujqvEJY16cmVn7uJCtYXaxY7hn9s9zFNHCZlkjj6GYatO+B9y
mK10rHUJ56PwlGdPWUgN+WRJbr1rbXJ0XhaNlR7d39XxrxFFI4MOvw2DNOvAOG6g
foIpA4ZeLhcGYIjsZxqrOZqVh1br4w5rWEvGqONi6LCrvwtMuNLAWExASkLJKIzw
vQ9jHpxYNqak0PHpsrHtUx50WsMt0ea1u/ioMKPNXs/Lkj18eGYpVI+S1wxDgKV+
m6K6uZUCgYEA9UKYCV1KiKAINTtwbTKHSa/vn/U6JKOLQUvPD2qpbVRdgS2R1mQS
soqeDW1d+Y4tRk/tnlmpolkuuNDxulr2CTm6wbgeU6TnF7pq7ClIZK3hv2VGTT3B
uXxx+cQ+zjqygAidopjLMUH/3aO7Ldw6gcuCLrjN1xEVJiD4IGTwxtsCgYEAxTJD
Fl9m5g3bCQPfpSfclI0weNPHIoVQ63IcqRHH+e0BR03YZWbq8lMl+t81q6G/rsIH
jD1Pl5RW9EhgguXOoMXeKVpT34M+gcJ0PdEI6+WZ3ZjJ0kwwPcypsA93aZmZx883
iksC2ZfIKqpCwguDKyvb5EcLNzrDSnMAl7NZOf8CgYEAoVqKg76ohnIidEMCmBSi
BMyGrYm8Eta1iuPA+beGd7MFQTMluxJjaqrfiJ3nMYNkLdnvzjnW7EQYBOcR4TRu
oWslfsUOzqCymF3AclZGllX/KtgKBE8Y4FsK8PM3Dp53SNxiONKk+2ccWkiZoHY+
1513rB1Q7qkCbO9LzqQZ8/kCgYEAgFAYPzKMrh1N7SvMFpc9fJvycmy7IsdExC9Y
XtrnGMUTE+afbDvvnQZlrDwZnDh/laNDbglnBObNPd7qjcIjFZIq4RWZhdLMlXqG
UML33ydjW0HT8TcKHOxTbfBibyA3ZEB9j0sH67ZL1Rc8oS8Ehs7fIkboEWP3NzZl
qFBXOtkCgYEAz9L2J9rpXQgwbPCOCjuPvm+zvAnGXdNgrUsVd8Tk1wczc5FyaBxw
DMgHo1BxELPETb0hNxEdQ0DdR83MXp0PZA1IG1XKcAH8CXloELwN3jpM+/6PHQRz
vdvkLPv3wM1Qdj4g6FlnPvlJHAlPytnDrUbSWxA6xMVYQJKw8na2Cm8=
-----END RSA PRIVATE KEY-----


@@ -0,0 +1,24 @@
-----BEGIN CERTIFICATE-----
MIID+jCCAuKgAwIBAgIQI/uQsaTfs97kfvVSTD400zANBgkqhkiG9w0BAQsFADAr
MRIwEAYDVQQKEwlDb2Nrcm9hY2gxFTATBgNVBAMTDENvY2tyb2FjaCBDQTAeFw0y
NTAxMjUyMDI0MTBaFw0zMDAxMzAyMDI0MTBaMCMxEjAQBgNVBAoTCUNvY2tyb2Fj
aDENMAsGA1UEAxMEbm9kZTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AJ8eplN7Xp2XZYJqlp+BvOh6sN0CqVo7tCbuXSt1ZpeC0EzRTU4u1j7cGhExzYSj
VUGootjPZIjB6OQu6JHzheubWUzYMXBC72PjKYbbwoE69b98GsIP9aJ3++0j5dln
TUP/SgiVf90w3ltb6MdlWX9VMpqsmCj3b1CqNfGT+Xc/pbSCN1oT7m5XUsaGkaux
BKp9QeI6Zii8q+qyt/U1+qFCE1AVMoJe/KRM3O3j+3G+90t/IKGnJj3wtSs8+BzC
FV2ZBPJcLsmL0are9yOVU+xhc8drLdefxZQiNL8nb3MgqQ/uVSfDhraMlna+mpxo
lLDm1Zm4AKlztwwxvIV+dT8CAwEAAaOCASAwggEcMA4GA1UdDwEB/wQEAwIFoDAd
BgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHwYDVR0jBBgwFoAU5Olr9c4v
u7OLVJrlGOtFrdh5+qQwgckGA1UdEQSBwTCBvoIJbG9jYWxob3N0ghJjb2Nrcm9h
Y2hkYi1wdWJsaWOCGmNvY2tyb2FjaGRiLXB1YmxpYy5kZWZhdWx0gixjb2Nrcm9h
Y2hkYi1wdWJsaWMuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2NhbIINKi5jb2Nrcm9h
Y2hkYoIVKi5jb2Nrcm9hY2hkYi5kZWZhdWx0gicqLmNvY2tyb2FjaGRiLmRlZmF1
bHQuc3ZjLmNsdXN0ZXIubG9jYWyHBH8AAAEwDQYJKoZIhvcNAQELBQADggEBAIth
4wIOZDDcuNDtsy3dxB2q/6miFaO0p2/iUyMci3b1nwlLTliKzWGgOCwNGGR4UXOM
zVQ1bu8I2w4zY5xF047xQDQR+ek4HyOayxLlua1fVCVq4jxv23vgJA4Gv0IhUbay
TfjnDDFhijy9URzBoVAwXAx2hGu1PlFmZ1bHjre13s1mTohO3nMTA+GsMGkLk8FB
M5wWDP8UKC9zmUXPSFLEscLWzjJ015Y/tqZUMFWB4bFsGKAxdkBR2PTWbnDETfrJ
7HymCOLBFinbMs8m+NPz1j+B8MGlwi0Eu5SWxiyWkt5FtczBdMcgnuVhZBWqqxko
E13Q6CHbMt+P3Ky3FMQ=
-----END CERTIFICATE-----


@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAnx6mU3tenZdlgmqWn4G86Hqw3QKpWju0Ju5dK3Vml4LQTNFN
Ti7WPtwaETHNhKNVQaii2M9kiMHo5C7okfOF65tZTNgxcELvY+MphtvCgTr1v3wa
wg/1onf77SPl2WdNQ/9KCJV/3TDeW1vox2VZf1UymqyYKPdvUKo18ZP5dz+ltII3
WhPubldSxoaRq7EEqn1B4jpmKLyr6rK39TX6oUITUBUygl78pEzc7eP7cb73S38g
oacmPfC1Kzz4HMIVXZkE8lwuyYvRqt73I5VT7GFzx2st15/FlCI0vydvcyCpD+5V
J8OGtoyWdr6anGiUsObVmbgAqXO3DDG8hX51PwIDAQABAoIBAFvoOi3yDl58Ohew
NTwAlfq6Ezo09Vi3L4FlIM+fShitaF9WbY6BIyK/wxa3a3v3U6FPJHCSqgEL79cM
+SyEOpAx9Myb+0Jahyds6GmKubgnNBbcOiBpU3n6T7tThsmiD1D9PefjYi2CsoyW
c8foVF9l+Iq6slDHSraO+gWFcQxc/9CizRsInGqHA64anN6XvBZoVBLlu2Fowg4G
EducEOiGCekYLiOUDcLBegv57STIA/lTQ8pqFk7HcFYgg4NQhMFoS1E79zdlkZfq
j7X/DHMbt8zvRZIlWp1PrDYMysYVQVCT0PbaSd8+x9bUbDKkoMkgSj/NHsQXYn4a
muEhj+ECgYEAx8NZxZ9JU4NL5rNN2crfs/QPwxCgKp+BI41px9hqLOWKqDfMB7fI
EjOlLJveZ07sFF2Yf2gMkzCwdrmHc2g0Rj0Csqzss6Si3ppvD6EIwREnmziiJplR
mq6dQzgd5u1p9YcbIZhjzKFvRWy9JR4Kl/0A+h0zN8QupvxelRBslZkCgYEAy+ow
J9cTUqEeBL69BQU2CUTnc/MKCKGeTPRWqtKfODd7uglTaUgQ0DxDBoJxnS4ORcCN
9isT/UNJov8ufoZ1U8Kk+nBX++K5QFb46/TEomxeW+oabBg1+oLEPyqmd0H2p5er
JDsgsURUAcgKEV6ac11rzl2rwwfhgo9WVTB2+JcCgYEAwEeu32QFBpe4tWUdqGd4
kBR6H36fTKeffAMgMLaE7JY9stGSWFN0BuEjOh8GIlZ7MtcsdGZIxFz3XjASyukg
eAM915JPfFMaWj44bMjKTlwezW/j1Fd7jvJIeW1IiwE3HphfayTt2wgAvMh//3w9
IjLrf9QfeqwhY6ZDvCPFAPECgYBHUHfW9xkC5OYisrJYdyIWy8pGetEfg6ZhM3K7
+z1D4+OZhHlvcIywxuKJ/ETPu7OyIU2Esjwjbszp/GS+SzftOz2HeJLMvNYc8k3L
96ZtR4kYjB8BftYh7mnDzZ66Ro+EvT5VRXiBhmv604Lx4CwT/LAfVBMl+jOb/ZUr
5e81sQKBgEmLXN7NBs/3TXukSBwxvcixZWmgFVJIfrUhXN34p1T0BjaFKaTKREDZ
ulpnWImY9p/Q5ey1dpNlC3b9c/ZNseBXwOfmSP6TkaWpWBWNgwVOWMa6r6gPDVgZ
TlEn2zeJH+4YjrMZga0Aoeg7HcJondSV0s8jQqBhRNVZFSMjF+tA
-----END RSA PRIVATE KEY-----


@@ -0,0 +1,3 @@
sudo mkdir -p /usr/local/lib/cockroach
sudo cp -i lib/libgeos.so /usr/local/lib/cockroach/
sudo cp -i lib/libgeos_c.so /usr/local/lib/cockroach/
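#Optional check that the GEOS libraries are picked up (a sketch; the host and certs paths are placeholders):
cockroach sql --certs-dir=certs --host=localhost --execute="SELECT ST_IsValid(ST_MakePoint(1, 2));"
#If the libraries are missing, CockroachDB returns an error saying the geos library could not be loaded.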

Binary file not shown.

Binary file not shown.


@@ -0,0 +1,289 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: cockroachdb
# namespace: cockroachdb
labels:
app: cockroachdb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: cockroachdb
# namespace: cockroachdb
labels:
app: cockroachdb
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cockroachdb
# namespace: cockroachdb
labels:
app: cockroachdb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cockroachdb
subjects:
- kind: ServiceAccount
name: cockroachdb
# namespace: default
---
apiVersion: v1
kind: Service
metadata:
# This service is meant to be used by clients of the database. It exposes a ClusterIP that will
# automatically load balance connections to the different database pods.
name: cockroachdb-public
# namespace: cockroachdb
labels:
app: cockroachdb
spec:
ports:
# The main port, served by gRPC, serves Postgres-flavor SQL, internode
# traffic and the cli.
- port: 26257
targetPort: 26257
name: grpc
# The secondary port serves the UI as well as health and debug endpoints.
- port: 8080
targetPort: 8080
name: http
selector:
app: cockroachdb
---
apiVersion: v1
kind: Service
metadata:
# This service only exists to create DNS entries for each pod in the stateful
# set such that they can resolve each other's IP addresses. It does not
# create a load-balanced ClusterIP and should not be used directly by clients
# in most circumstances.
name: cockroachdb
# namespace: cockroachdb
labels:
app: cockroachdb
annotations:
# Use this annotation in addition to the actual publishNotReadyAddresses
# field below because the annotation will stop being respected soon but the
# field is broken in some versions of Kubernetes:
# https://github.com/kubernetes/kubernetes/issues/58662
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
# Enable automatic monitoring of all instances when Prometheus is running in the cluster.
prometheus.io/scrape: "true"
prometheus.io/path: "_status/vars"
prometheus.io/port: "8080"
spec:
ports:
- port: 26257
targetPort: 26257
name: grpc
- port: 8080
targetPort: 8080
name: http
# We want all pods in the StatefulSet to have their addresses published for
# the sake of the other CockroachDB pods even before they're ready, since they
# have to be able to talk to each other in order to become ready.
publishNotReadyAddresses: true
clusterIP: None
selector:
app: cockroachdb
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: cockroachdb-budget
# namespace: cockroachdb
labels:
app: cockroachdb
spec:
selector:
matchLabels:
app: cockroachdb
maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cockroachdb
# namespace: cockroachdb
spec:
serviceName: "cockroachdb"
replicas: 3
selector:
matchLabels:
app: cockroachdb
template:
metadata:
labels:
app: cockroachdb
spec:
serviceAccountName: cockroachdb
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- cockroachdb
topologyKey: kubernetes.io/hostname
containers:
- name: cockroachdb
image: cockroachdb/cockroach:v24.1.2
imagePullPolicy: IfNotPresent
args: ["-- insecure"]
# TODO: Change these to appropriate values for the hardware that you're running. You can see
# the resources that can be allocated on each of your Kubernetes nodes by running:
# kubectl describe nodes
# Note that requests and limits should have identical values.
resources:
requests:
cpu: "2"
memory: "2Gi"
limits:
cpu: "2"
memory: "2Gi"
ports:
- containerPort: 26257
name: grpc
- containerPort: 8080
name: http
# We recommend that you do not configure a liveness probe on a production environment, as this can impact the availability of production databases.
# livenessProbe:
# httpGet:
# path: "/health"
# port: http
# scheme: HTTPS
# initialDelaySeconds: 30
# periodSeconds: 5
readinessProbe:
httpGet:
path: "/health?ready=1"
port: http
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 2
volumeMounts:
- name: datadir
mountPath: /cockroach/cockroach-data
- name: certs
mountPath: /cockroach/cockroach-certs
env:
- name: COCKROACH_CHANNEL
value: kubernetes-secure
- name: GOMAXPROCS
valueFrom:
resourceFieldRef:
resource: limits.cpu
divisor: "1"
- name: MEMORY_LIMIT_MIB
valueFrom:
resourceFieldRef:
resource: limits.memory
divisor: "1Mi"
command:
- "/bin/bash"
- "-ecx"
# The use of qualified `hostname -f` is crucial:
# Other nodes aren't able to look up the unqualified hostname.
- exec
/cockroach/cockroach
start
--logtostderr
--certs-dir /cockroach/cockroach-certs
--advertise-host $(hostname -f)
--http-addr 0.0.0.0
--join cockroachdb-0.cockroachdb,cockroachdb-1.cockroachdb,cockroachdb-2.cockroachdb
--cache $(expr $MEMORY_LIMIT_MIB / 4)MiB
--max-sql-memory $(expr $MEMORY_LIMIT_MIB / 4)MiB
# No pre-stop hook is required, a SIGTERM plus some time is all that's
# needed for graceful shutdown of a node.
terminationGracePeriodSeconds: 60
volumes:
- name: datadir
persistentVolumeClaim:
claimName: datadir
- name: certs
secret:
secretName: cockroachdb.node
defaultMode: 256
podManagementPolicy: Parallel
updateStrategy:
type: RollingUpdate
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
name: cockroach-tls
# namespace: cockroachdb
spec:
entryPoints:
- websecure
routes:
- match: HostSNI(`cockroach-prod.allarddcs.nl`)
services:
- name: cockroachdb-public
port: 8080
tls:
passthrough: true
---
# Generated file, DO NOT EDIT. Source: cloud/kubernetes/templates/bring-your-own-certs/client.yaml
# This config file demonstrates how to connect to the CockroachDB StatefulSet
# defined in bring-your-own-certs-statefulset.yaml that uses certificates
# created outside of Kubernetes. See that file for why you may want to use it.
# You should be able to adapt the core ideas to deploy your own custom
# applications and connect them to the database similarly.
#
# The pod that this file defines will sleep in the cluster not using any
# resources. After creating the pod, you can use it to open up a SQL shell to
# the database by running:
#
# kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --url="postgres://root@cockroachdb-public:26257/?sslmode=verify-full&sslcert=/cockroach-certs/client.root.crt&sslkey=/cockroach-certs/client.root.key&sslrootcert=/cockroach-certs/ca.crt"
apiVersion: v1
kind: Pod
metadata:
name: cockroachdb-client-secure
# namespace: cockroachdb
labels:
app: cockroachdb-client
spec:
serviceAccountName: cockroachdb
containers:
- name: cockroachdb-client
image: cockroachdb/cockroach:v24.1.2
# Keep a pod open indefinitely so kubectl exec can be used to get a shell to it
# and run cockroach client commands, such as cockroach sql, cockroach node status, etc.
command:
- sleep
- "2147483648" # 2^31
volumeMounts:
- name: client-certs
mountPath: /cockroach-certs
volumes:
- name: client-certs
secret:
secretName: cockroachdb.client.root
defaultMode: 256

16
dev/cockroachdb/install.sh Executable file

@@ -0,0 +1,16 @@
#!/bin/bash
rm -rf certs
rm -rf my-safe-directory
mkdir certs
mkdir my-safe-directory
cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key
cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key
#microk8s kubectl create ns cockroachdb
microk8s kubectl create secret generic cockroachdb.client.root --from-file=certs
cockroach cert create-node --certs-dir=certs --ca-key=my-safe-directory/ca.key localhost 127.0.0.1 cockroachdb-public cockroachdb-public.default cockroachdb-public.default.svc.cluster.local *.cockroachdb *.cockroachdb.default *.cockroachdb.default.svc.cluster.local
microk8s kubectl create secret generic cockroachdb.node --from-file=certs
microk8s kubectl create -f cockroachdb.yaml
microk8s kubectl get pod
microk8s kubectl exec -it cockroachdb-0 \
-- /cockroach/cockroach init \
--certs-dir=/cockroach/cockroach-certs
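#Optional sanity check after init (a sketch, reusing the certs directory mounted in the pod):
microk8s kubectl exec -it cockroachdb-0 \
  -- /cockroach/cockroach node status \
  --certs-dir=/cockroach/cockroach-certs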


@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAvBJOTewyeYeWUncc7wx27bRCaDH7YawGyaltYypUzo93li+8
K5UwVSYfy3mxNp47IQXebDPCQITct5pGq/EBTrWGJ/MLf8ZcCfPvvzylsqsesFFf
S5y0sYof+JzyowDOJflWsQnJLIK5kD32fvupvc0dKY8q/4WN/Ra1kiUm6ZcFYWVK
Jx2s2ZVWcDP5xh+obCgP3F4cTsLjo1mkoRPMSLw5w9M5x3AiDgi6zwkcw9aUVq0l
BciAlI4cAHC4Awc1AP3OazYV/E+cC6dtzS+55KRGQIYOp/pkgBKsTAd2ahuZTh8Z
WXySp30X0luRUO9wBksGEt5ixx5QdtOd0jQWLQIDAQABAoIBAQCwnCQqap7vnxLb
t/1UwojAKeGehSlCjFAHefI+CFeBbhpnz8XNy5iKrXV4F3wCBU8TcLZxN524Bsxa
Iicxee23YyFrTIJE6BowQoGmPSaBBM6Z1qA9mhfZDRN+3KvBxJTR9jaho8Xl5ZCq
UnWyw1Of6Aj1qPtA3sL6oyO47OiAu3Ph2+jlXBTlpmNQlz3BjansHpV0l9IsYY0H
dhAieMY4piYzB6LIFQUBH8T7gxnToPvgulSWaKV1mG7Xw/lSoj1YpDXXWYWMfiDB
Xl55Pyrp44J8+cdATGFIgk+ln5aeDQNtVV3wLIHsSrZaZ6ojFFpBY3qj4LvYmRjS
0Sj79ErFAoGBAN/riyjNfgSRs2wqsMPcVwetKHmP7we5wA8WAWMj1glDfjhNfHo1
J6gEYASc2ai44aK5P6XIGeAt1NmAAqaeJKKk1/fMUKbgCLLeG+Ds24Q9FTIigUpW
kMctLTHJ9mkr2xSNfBUrjwvsvnZKYox6tBcYPDsnpgj/lkEJ7S32S5MjAoGBANcD
/ElaTUHFOr/q6YALQUgw97xBSff1WLa5ESByUXrirpNyKchnU6hY1Ndo9snd4QZs
RZIsPEPBbR1hN2R/gTbUn2hVGPxLZ0wUs/IbsYPXAsunRD87g2gI0W++OR3sz5j4
p/6NodgsRcOmAXG1pZwJAFAJLTqUkTF0yXg8dS5vAoGACK6MRbe59BlmGIKLOfzY
Dv8iu5veC7GjBbK3uQ1RpihMw4gVlHNtJzGMO4GNWuJYNUPzeM0KW8vLHee9spId
H4U+rmfolJ/JFo5QDGeCl1z67meyFZzHnkFdKDoJaMh/hQt7TSLUOAUk2VdG/OVh
CCgzZaPC50RpofntjUOoaHsCgYBORvoq7kAgCKCZy/jUD8TldkZKd+5o4h4472kn
ydaWCT6LGU3S0qMnL6fVADaQSUGp5/LwA0CxXhLOVl0nLjApeQDLp+dfukfR79uO
8bwPhlBTOgLjjlQJpOQybSs4FMWDKEtopcFdBMklMCNodTvkcXZ2rNCVeg7d1Wmf
Z0s16wKBgA8KPg/7fEdmXItkbcVd2tyngCOo1NNXyGmZ7SnrkoXilyiKzZwmeUZl
PN27ciS/VpKTb278tNdQudmlBs28/McKddz9SnAKvTP/WbUXAh3gpeDTX9KVD7++
Z7wCBrQcb2z5WG2ojUwbYYZGjuouYJT2WGElDoOxRT4eCSbgj4kB
-----END RSA PRIVATE KEY-----

336
dev/cockroachdb/pvc.yaml Executable file

@@ -0,0 +1,336 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: cockroachdb
namespace: cockroachdb
labels:
app: cockroachdb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: cockroachdb
namespace: cockroachdb
labels:
app: cockroachdb
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cockroachdb
namespace: cockroachdb
labels:
app: cockroachdb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cockroachdb
subjects:
- kind: ServiceAccount
name: cockroachdb
namespace: cockroachdb
---
apiVersion: v1
kind: Service
metadata:
# This service is meant to be used by clients of the database. It exposes a ClusterIP that will
# automatically load balance connections to the different database pods.
name: cockroachdb-public
namespace: cockroachdb
labels:
app: cockroachdb
spec:
ports:
# The main port, served by gRPC, serves Postgres-flavor SQL, internode
# traffic and the cli.
- port: 26257
targetPort: 26257
name: grpc
# The secondary port serves the UI as well as health and debug endpoints.
- port: 8080
targetPort: 8080
name: http
selector:
app: cockroachdb
---
apiVersion: v1
kind: Service
metadata:
# This service only exists to create DNS entries for each pod in the stateful
# set such that they can resolve each other's IP addresses. It does not
# create a load-balanced ClusterIP and should not be used directly by clients
# in most circumstances.
name: cockroachdb
namespace: cockroachdb
labels:
app: cockroachdb
annotations:
# Use this annotation in addition to the actual publishNotReadyAddresses
# field below because the annotation will stop being respected soon but the
# field is broken in some versions of Kubernetes:
# https://github.com/kubernetes/kubernetes/issues/58662
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
# Enable automatic monitoring of all instances when Prometheus is running in the cluster.
prometheus.io/scrape: "true"
prometheus.io/path: "_status/vars"
prometheus.io/port: "8080"
spec:
ports:
- port: 26257
targetPort: 26257
name: grpc
- port: 8080
targetPort: 8080
name: http
# We want all pods in the StatefulSet to have their addresses published for
# the sake of the other CockroachDB pods even before they're ready, since they
# have to be able to talk to each other in order to become ready.
publishNotReadyAddresses: true
clusterIP: None
selector:
app: cockroachdb
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: cockroachdb-budget
namespace: cockroachdb
labels:
app: cockroachdb
spec:
selector:
matchLabels:
app: cockroachdb
maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cockroachdb
namespace: cockroachdb
spec:
serviceName: "cockroachdb"
replicas: 3
selector:
matchLabels:
app: cockroachdb
template:
metadata:
labels:
app: cockroachdb
spec:
serviceAccountName: cockroachdb
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- cockroachdb
topologyKey: kubernetes.io/hostname
containers:
- name: cockroachdb
image: cockroachdb/cockroach:v24.1.2
imagePullPolicy: IfNotPresent
# TODO: Change these to appropriate values for the hardware that you're running. You can see
# the resources that can be allocated on each of your Kubernetes nodes by running:
# kubectl describe nodes
# Note that requests and limits should have identical values.
resources:
requests:
cpu: "2"
memory: "2Gi"
limits:
cpu: "2"
memory: "2Gi"
ports:
- containerPort: 26257
name: grpc
- containerPort: 8080
name: http
# We recommend that you do not configure a liveness probe on a production environment, as this can impact the availability of production databases.
# livenessProbe:
# httpGet:
# path: "/health"
# port: http
# scheme: HTTPS
# initialDelaySeconds: 30
# periodSeconds: 5
readinessProbe:
httpGet:
path: "/health?ready=1"
port: http
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 2
volumeMounts:
- name: datadir
mountPath: /cockroach/cockroach-data
- name: certs
mountPath: /cockroach/cockroach-certs
env:
- name: COCKROACH_CHANNEL
value: kubernetes-secure
- name: GOMAXPROCS
valueFrom:
resourceFieldRef:
resource: limits.cpu
divisor: "1"
- name: MEMORY_LIMIT_MIB
valueFrom:
resourceFieldRef:
resource: limits.memory
divisor: "1Mi"
command:
- "/bin/bash"
- "-ecx"
# The use of qualified `hostname -f` is crucial:
# Other nodes aren't able to look up the unqualified hostname.
- exec
/cockroach/cockroach
start
--logtostderr
--certs-dir /cockroach/cockroach-certs
--advertise-host $(hostname -f)
--http-addr 0.0.0.0
--join cockroachdb-0.cockroachdb,cockroachdb-1.cockroachdb,cockroachdb-2.cockroachdb
--cache $(expr $MEMORY_LIMIT_MIB / 4)MiB
--max-sql-memory $(expr $MEMORY_LIMIT_MIB / 4)MiB
# No pre-stop hook is required, a SIGTERM plus some time is all that's
# needed for graceful shutdown of a node.
terminationGracePeriodSeconds: 60
volumes:
- name: datadir
persistentVolumeClaim:
claimName: datadir
- name: certs
secret:
secretName: cockroachdb.node
defaultMode: 256
podManagementPolicy: Parallel
updateStrategy:
type: RollingUpdate
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: datadir-cockroachdb-0
spec:
storageClassName: ""
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
mountOptions:
- hard
- nfsvers=4.1
nfs:
server: 192.168.2.110
path: /mnt/nfs_share/cockroachdb/0
readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: datadir-cockroachdb-1
spec:
storageClassName: ""
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
mountOptions:
- hard
- nfsvers=4.1
nfs:
server: 192.168.2.110
path: /mnt/nfs_share/cockroachdb/1
readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: datadir-cockroachdb-2
spec:
storageClassName: ""
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
mountOptions:
- hard
- nfsvers=4.1
nfs:
server: 192.168.2.110
path: /mnt/nfs_share/cockroachdb/2
readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: datadir-cockroachdb-0
namespace: cockroachdb
spec:
storageClassName: nfs-client
volumeName: datadir-cockroachdb-0
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: datadir-cockroachdb-1
namespace: cockroachdb
spec:
storageClassName: nfs-client
volumeName: datadir-cockroachdb-1
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: datadir-cockroachdb-2
namespace: cockroachdb
spec:
storageClassName: nfs-client
volumeName: datadir-cockroachdb-2
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 1Gi


@@ -0,0 +1,41 @@
#!/bin/bash
launcherJar=( server/plugins/org.jkiss.dbeaver.launcher*.jar )
echo "Starting CloudBeaver Enterprise Server"
[ ! -d "workspace/.metadata" ] && mkdir -p workspace/.metadata \
&& mkdir -p workspace/GlobalConfiguration/.dbeaver \
&& [ ! -f "workspace/GlobalConfiguration/.dbeaver/data-sources.json" ] \
&& cp conf/initial-data-sources.conf workspace/GlobalConfiguration/.dbeaver/data-sources.json
exec java ${JAVA_OPTS} \
-Dfile.encoding=UTF-8 \
--add-modules=ALL-SYSTEM \
--add-opens=java.base/java.io=ALL-UNNAMED \
--add-opens=java.base/java.lang=ALL-UNNAMED \
--add-opens=java.base/java.lang.reflect=ALL-UNNAMED \
--add-opens=java.base/java.net=ALL-UNNAMED \
--add-opens=java.base/java.nio=ALL-UNNAMED \
--add-opens=java.base/java.nio.charset=ALL-UNNAMED \
--add-opens=java.base/java.text=ALL-UNNAMED \
--add-opens=java.base/java.time=ALL-UNNAMED \
--add-opens=java.base/java.util=ALL-UNNAMED \
--add-opens=java.base/java.util.concurrent=ALL-UNNAMED \
--add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED \
--add-opens=java.base/jdk.internal.vm=ALL-UNNAMED \
--add-opens=java.base/jdk.internal.misc=ALL-UNNAMED \
--add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
--add-opens=java.base/sun.security.ssl=ALL-UNNAMED \
--add-opens=java.base/sun.security.action=ALL-UNNAMED \
--add-opens=java.base/sun.security.util=ALL-UNNAMED \
--add-opens=java.security.jgss/sun.security.jgss=ALL-UNNAMED \
--add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED \
--add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED \
--add-opens=java.sql/java.sql=ALL-UNNAMED \
-jar ${launcherJar} \
-product io.cloudbeaver.product.ee.product \
-data ${workspacePath} \
-web-config conf/cloudbeaver.conf \
-nl en \
-registryMultiLanguage

10
dev/cockroachdb/uninstall.sh Executable file

@@ -0,0 +1,10 @@
#!/bin/bash
microk8s kubectl delete -f cockroachdb.yaml
microk8s kubectl delete pvc datadir-cockroachdb-0 -n cockroachdb
microk8s kubectl delete pvc datadir-cockroachdb-1 -n cockroachdb
microk8s kubectl delete pvc datadir-cockroachdb-2 -n cockroachdb
microk8s kubectl delete secret cockroachdb.node -n cockroachdb
microk8s kubectl delete secret cockroachdb.client.root -n cockroachdb
microk8s kubectl delete ns cockroachdb
rm -rf certs
rm -rf my-safe-directory

40
dev/cosign/README.md Executable file

@@ -0,0 +1,40 @@
#Signing an image with an SBOM
#Generate an SBOM in SPDX format:
syft quay.alldcs.nl/allard/olproperties:master -o spdx > olproperties.spdx
#attach the sbom to the image:
cosign attach sbom --sbom olproperties.spdx quay.alldcs.nl/allard/olproperties:master
WARNING: Attaching SBOMs this way does not sign them. If you want to sign them, use '
cosign attest --predicate olproperties.spdx --key <key path>' or 'cosign sign --key <key path> --attachment sbom <image uri>'
Uploading SBOM file for [quay.alldcs.nl/allard/olproperties:master] to [quay.alldcs.nl/allard/olproperties:sha256-4d79a08eb15ea8c9730e77fc54bea37299b4ed21d8b875d95fd54cd78e3556c9.sbom] with mediaType [text/spdx].
#Sign the SBOM:
cosign sign --key cosign.key quay.alldcs.nl/allard/olproperties:sha256-4d79a08eb15ea8c9730e77fc54bea37299b4ed21d8b875d95fd54cd78e3556c9.sbom
#output:
Enter password for private key:
WARNING: Image reference quay.alldcs.nl/allard/olproperties:sha256-4d79a08eb15ea8c9730e77fc54bea37299b4ed21d8b875d95fd54cd78e3556c9.sbom uses a tag, not a digest, to identify the image to sign.
This can lead you to sign a different image than the intended one. Please use a
digest (example.com/ubuntu@sha256:abc123...) rather than tag
(example.com/ubuntu:latest) for the input to cosign. The ability to refer to
images by tag will be removed in a future release.
The sigstore service, hosted by sigstore a Series of LF Projects, LLC, is provided pursuant to the Hosted Project Tools Terms of Use, available at https://lfprojects.org/policies/hosted-project-tools-terms-of-use/.
Note that if your submission includes personal data associated with this signed artifact, it will be part of an immutable record.
This may include the email address associated with the account with which you authenticate your contractual Agreement.
This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later, and is subject to the Immutable Record notice at https://lfprojects.org/policies/hosted-project-tools-immutable-records/.
By typing 'y', you attest that (1) you are not submitting the personal data of any other person; and (2) you understand and agree to the statement and the Agreement terms at the URLs listed above.
Are you sure you would like to continue? [y/N] y
tlog entry created with index: 41682114
Pushing signature to: quay.alldcs.nl/allard/olproperties
#attest
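#A sketch of the attest/verify steps, following the command suggested in the warning above (key pair taken from cosign.key/cosign.pub in this directory; the digest is the one embedded in the .sbom tag):
cosign attest --predicate olproperties.spdx --key cosign.key quay.alldcs.nl/allard/olproperties@sha256:4d79a08eb15ea8c9730e77fc54bea37299b4ed21d8b875d95fd54cd78e3556c9
#verify the signature and the attestation:
cosign verify --key cosign.pub quay.alldcs.nl/allard/olproperties@sha256:4d79a08eb15ea8c9730e77fc54bea37299b4ed21d8b875d95fd54cd78e3556c9
cosign verify-attestation --key cosign.pub quay.alldcs.nl/allard/olproperties@sha256:4d79a08eb15ea8c9730e77fc54bea37299b4ed21d8b875d95fd54cd78e3556c9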


@@ -0,0 +1,10 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: dev-cosign
title: Cosign (dev)
spec:
type: service
lifecycle: production
owner: allarddcs
subcomponentOf: component:default/DEV-cluster

11
dev/cosign/cosign.key Executable file

@@ -0,0 +1,11 @@
-----BEGIN ENCRYPTED SIGSTORE PRIVATE KEY-----
eyJrZGYiOnsibmFtZSI6InNjcnlwdCIsInBhcmFtcyI6eyJOIjozMjc2OCwiciI6
OCwicCI6MX0sInNhbHQiOiJxL1Fzdkk2di9JQlFjN096Z1N2aFhtNllYbGpHemVv
OFhDS2lRUE1jK0RvPSJ9LCJjaXBoZXIiOnsibmFtZSI6Im5hY2wvc2VjcmV0Ym94
Iiwibm9uY2UiOiJ1T2h2c1AyMkh1d2M5RGF3OTZRNkVPcFNTTHhmbG5BKyJ9LCJj
aXBoZXJ0ZXh0IjoicHcxdm5BSENQUmgrZmMrM0t6UjVQTzdUU1hjcGRsMkEvdmhW
T3JHS2IzRWxtWGlNS2l3Wlo5M2pFT1MvdjZic3hjWXlOL3NKcmY0Ulc0TVQreDNw
SXJWd1duTlJCUWhmZ0VLb0xLZXhKNktOcnhTa1R0OE8zT25nZE1XNlBzSVZueldl
dTdZUWQrRW9KQnRxalVqb1dXYTBtTjcyNVZKVTFUNkNWNlh1K1UxVHNtYndKOWtB
TUpYVkttNmJyQys4MFJDL3dCS0x2dnZmTXc9PSJ9
-----END ENCRYPTED SIGSTORE PRIVATE KEY-----

4
dev/cosign/cosign.pub Executable file

@@ -0,0 +1,4 @@
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEhvRXr/p/gE2ZVuf/aq+RktGqLWyR
fVHwC7ROAnfKL5zcsO3Deoao5nBXESQ9/6P/YB9Zjrw82ST2N4+e6bzFkA==
-----END PUBLIC KEY-----

85579
dev/cosign/olproperties.spdx Executable file

File diff suppressed because it is too large

1
dev/crate/alter_table Executable file

@@ -0,0 +1 @@
ALTER TABLE iss SET ("blocks.read_only_allow_delete" = FALSE)


@@ -0,0 +1,10 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: dev-crate
title: Crate (dev)
spec:
type: service
lifecycle: production
owner: allarddcs
subcomponentOf: component:default/DEV-cluster

97
dev/crate/controler.yaml Executable file

@@ -0,0 +1,97 @@
kind: StatefulSet
apiVersion: "apps/v1"
metadata:
# This is the name used as a prefix for all pods in the set.
name: crate
spec:
serviceName: "crate-set"
# Our cluster has three nodes.
replicas: 3
selector:
matchLabels:
# The pods in this cluster have the `app:crate` app label.
app: crate
template:
metadata:
labels:
app: crate
spec:
# InitContainers run before the main containers of a pod are
# started, and they must terminate before the primary containers
# are initialized. Here, we use one to set the correct memory
# map limit.
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
# This final section is the core of the StatefulSet configuration.
# It defines the container to run in each pod.
containers:
- name: crate
# Use the CrateDB 5.1.1 Docker image.
image: crate:5.1.1
# Pass in configuration to CrateDB via command-line options.
# We are setting the node names explicitly, which is
# needed to determine the initial master nodes. These are set to
# the name of the pod.
# We are using the SRV records provided by Kubernetes to discover
# nodes within the cluster.
args:
- -Cnode.name=${POD_NAME}
- -Ccluster.name=${CLUSTER_NAME}
- -Ccluster.initial_master_nodes=crate-0,crate-1,crate-2
- -Cdiscovery.seed_providers=srv
- -Cdiscovery.srv.query=_crate-internal._tcp.crate-internal-service.${NAMESPACE}.svc.cluster.local
- -Cgateway.recover_after_data_nodes=2
- -Cgateway.expected_data_nodes=${EXPECTED_NODES}
- -Cpath.data=/data
volumeMounts:
# Mount the `/data` directory as a volume named `data`.
- mountPath: /data
name: data
resources:
limits:
# How much memory each pod gets.
memory: 512Mi
ports:
# Port 4300 for inter-node communication.
- containerPort: 4300
name: crate-internal
# Port 4200 for HTTP clients.
- containerPort: 4200
name: crate-web
# Port 5432 for PostgreSQL wire protocol clients.
- containerPort: 5432
name: postgres
# Environment variables passed through to the container.
env:
# This variable is detected by CrateDB.
- name: CRATE_HEAP_SIZE
value: "256m"
# The rest of these variables are used in the command-line
# options.
- name: EXPECTED_NODES
value: "3"
- name: CLUSTER_NAME
value: "my-crate"
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeClaimTemplates:
# Use persistent storage.
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
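Once all three pods are ready, cluster formation can be checked with a query against sys.nodes (a sketch; <host> is a placeholder for a node address or the external LoadBalancer IP, and crash is the same client used in iss.sh):
crash --hosts <host>:4200 --command "SELECT name, hostname FROM sys.nodes;"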

98
dev/crate/crate-storage.yaml Executable file

@@ -0,0 +1,98 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: crate-pv-0
spec:
storageClassName: ""
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
mountOptions:
- hard
- nfsvers=4.1
nfs:
server: 192.168.40.100
path: /mnt/nfs_share/crate/crate-0
readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: crate-pv-1
spec:
storageClassName: ""
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
mountOptions:
- hard
- nfsvers=4.1
nfs:
server: 192.168.40.100
path: /mnt/nfs_share/crate/crate-1
readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: crate-pv-2
spec:
storageClassName: ""
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
mountOptions:
- hard
- nfsvers=4.1
nfs:
server: 192.168.40.100
path: /mnt/nfs_share/crate/crate-2
readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data-crate-0
spec:
storageClassName: ""
volumeName: crate-pv-0
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data-crate-1
spec:
storageClassName: ""
volumeName: crate-pv-1
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data-crate-2
spec:
storageClassName: ""
volumeName: crate-pv-2
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 1Gi

4
dev/crate/create-table Executable file

@@ -0,0 +1,4 @@
CREATE TABLE iss (
timestamp TIMESTAMP GENERATED ALWAYS AS CURRENT_TIMESTAMP,
position GEO_POINT
);

19
dev/crate/external-service.yaml Executable file

@@ -0,0 +1,19 @@
kind: Service
apiVersion: v1
metadata:
name: crate-external-service
labels:
app: crate
spec:
# Create an externally reachable load balancer.
type: LoadBalancer
ports:
# Port 4200 for HTTP clients.
- port: 4200
name: crate-web
# Port 5432 for PostgreSQL wire protocol clients.
- port: 5432
name: postgres
selector:
# Apply this to all nodes with the `app:crate` label.
app: crate
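A connection sketch over the PostgreSQL wire protocol exposed above (assumes the default crate user with no password and a placeholder LoadBalancer address; any PostgreSQL client should behave similarly):
psql -h <load-balancer-ip> -p 5432 -U crate -c "SELECT name FROM sys.cluster;"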

15
dev/crate/ingressroute-tls.yaml Executable file

@@ -0,0 +1,15 @@
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: cratedb-tls
spec:
entryPoints:
- websecure
routes:
- match: Host(`cratedb.alldcs.nl`)
kind: Rule
services:
- name: crate-ui
port: 4200
tls:
certResolver: letsencrypt

36
dev/crate/internal-service.yaml Executable file

@@ -0,0 +1,36 @@
kind: Service
apiVersion: v1
metadata:
name: crate-internal-service
labels:
app: crate
spec:
# A static IP address is assigned to this service. This IP address is
# only reachable from within the Kubernetes cluster.
type: ClusterIP
ports:
# Port 4300 for inter-node communication.
- port: 4300
name: crate-internal
selector:
# Apply this to all nodes with the `app:crate` label.
app: crate
---
kind: Service
apiVersion: v1
metadata:
name: crate-ui
labels:
app: crate
spec:
# A static IP address is assigned to this service. This IP address is
# only reachable from within the Kubernetes cluster.
type: ClusterIP
ports:
# Port 4200 for HTTP clients (web UI).
- port: 4200
name: crate-web
selector:
# Apply this to all nodes with the `app:crate` label.
app: crate

20
dev/crate/iss.sh Executable file

@@ -0,0 +1,20 @@
#!/bin/bash
# Exit immediately if a pipeline returns a non-zero status
set -e
position () {
curl -s http://api.open-notify.org/iss-now.json |
jq -r '[.iss_position.longitude, .iss_position.latitude] | @tsv';
}
wkt_position () {
echo "POINT ($(position | expand -t 1))";
}
while true; do
crash --hosts 192.168.40.81:4200 \
--command "INSERT INTO iss (position) VALUES ('$(wkt_position)')"
echo 'Sleeping for 10 seconds...'
sleep 10
done

1
dev/crate/select Executable file

@@ -0,0 +1 @@
SELECT "timestamp", "position" FROM "doc"."iss" LIMIT 1000;


@@ -0,0 +1,16 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: dev-defectdojo
title: Defectdojo (dev)
annotations:
backstage.io/kubernetes-label-selector: "app=defectdojo"
links:
- url: https://github.com/AllardKrings/kubernetes/dev/defectdojo
docs:
- url: ./README.md
spec:
type: service
lifecycle: production
owner: allarddcs
subcomponentOf: component:default/DEV-cluster

Some files were not shown because too many files have changed in this diff.