
HANGAR Documentation

Table of Contents

  • HANGAR Documentation
    • Quick Start
      • Installation Requirements
      • Preparing the HANGAR Images
      • Helm Installation
      • Alternative Installation without Authentication
    • Configuration
      • IAM User Management – Keycloak
      • HANGAR – User Permissions
      • HANGAR – Infrastructure Cluster
      • Infrastructure Cluster Overview
      • Connecting Worker Nodes
    • Managed HANGAR Cluster
      • Cluster Management

Quick Start

Installation of HANGAR via the Helm chart.

Installation Requirements

  • K8s-Cluster
    • DNS
    • Cert-manager
      • Issuer to create valid TLS certificates (see the example after this list)
    • Ingress
    • (optional) CSI-Storage
  • Image Registry
  • optional:
    • IAM Provider
      • An OIDC provider with an OIDC Hangar client
      • OIDC tokens must support the profile, email, and roles scopes
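
For the cert-manager requirement, a minimal sketch of what such an issuer could look like is shown below. It is a self-signed Issuer intended for testing only; the name selfsigned-issuer and the namespace hangar are placeholders, not defaults of the HANGAR chart.

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer # placeholder, referenced later as <Issuer for TLS Certs>
  namespace: hangar # assumed namespace of the HANGAR installation
spec:
  selfSigned: {}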

Preparing the HANGAR Images

The tarball contains the required HANGAR images. These must be imported into an image registry beforehand.

docker image import file|URL|- [REPOSITORY[:TAG]]
docker image import hangar-images/backend-config-service.tar   registry:5000/hangar/backend/config-service:10.25
docker image import hangar-images/backend-dashboard-service.tar registry:5000/hangar/backend/dashboard-service:10.25
docker image import hangar-images/backend-management-service.tar registry:5000/hangar/backend/management-service:10.25
docker image import hangar-images/backend-node-service.tar registry:5000/hangar/backend/node-service:10.25
docker image import hangar-images/backend-oidc-provider.tar registry:5000/hangar/backend/oidc-provider:10.25
docker image import hangar-images/frontend-ui.tar registry:5000/hangar/frontend/ui:10.25
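
Note that docker image import only creates the tagged images on the local Docker host. If registry:5000 is a separate registry, the images typically still have to be pushed there afterwards, assuming the same image names as above:

docker image push registry:5000/hangar/backend/config-service:10.25
docker image push registry:5000/hangar/frontend/ui:10.25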

Helm Installation

The HANGAR installation is performed via the Helm chart.

helm install hangar hangar-0.0.0-10-25.tgz -f hangar-values.yaml

In the values file, the following parameters must be set to perform a successful installation with authentication enabled.

The values file shown below installs a Keycloak instance as the IAM provider.

Important:
The Keycloak configuration is not production-ready and only serves to provide a simple IAM provider for testing purposes.
Ideally, HANGAR should be integrated with an existing IAM provider.

global: 
  hangarHost: "<URL of the Hangar installation>"
hangarSuperAdmins:
- hangar-super-admin@example.com # email addresses of the IAM users that should be granted SuperAdmin permissions in Hangar

# Secrets
secrets:
  # base64 data of a client id
  oAuthClientID: ""
  # base64 data of client secret
  oAuthClientSecret: ""
ingress:
  className: "nginx"
  annotations:
    cert-manager.io/issuer: "<Issuer for TLS Certs>" # predefined Cert-Issuer
keycloak:
  enabled: true
  database:
    # place a secret with usernameKey and passwordKey data in the keycloak namespace and set the values;
    # this user will be used as the keycloak database user
    existingSecret: keycloak-postgresql
  configCli:
    enabled: true
    # this will create a keycloak client for hangar with example users
    # Use only for testing
    createHangarClientConfig: 
      enabled: true
  deployTheme: false
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations: 
      cert-manager.io/issuer: "<Issuer for TLS Certs>" # predefined Cert-Issuer
      # nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
      # cert-manager.io/cluster-issuer: letsencrypt-prod
    # set host for keycloak if it is different from hangar uri
    # host: <keycloak host uri if different from hangar host>
    httpRelativePath: "/_keycloak/"
    tls: true
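
The oAuthClientID and oAuthClientSecret values expect the OIDC client credentials base64-encoded. They can be prepared, for example, as follows; hangar-client and the secret are placeholder values:

echo -n "hangar-client" | base64          # aGFuZ2FyLWNsaWVudA==
echo -n "<oidc-client-secret>" | base64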

Additionally, the image registry must also be specified:

frontend:
  image: 
    repository: registry:5000/hangar/frontend/ui
    tag: "10.25"
oidcProvider:
  image:
    repository: registry:5000/hangar/backend/oidc-provider
    tag: "10.25"
dashboardService:
  image:
    repository: registry:5000/hangar/backend/dashboard-service
    tag: "10.25"
managementService:
  image:
    repository: registry:5000/hangar/backend/management-service
    tag: "10.25"
nodeService:
  image:
    repository: registry:5000/hangar/backend/node-service
    tag: "10.25"
configService:
  image:
    repository: registry:5000/hangar/backend/config-service
    tag: "10.25"

Alternative Installation without Authentication

Without Keycloak and an IAM provider, HANGAR can be installed by setting the disableAuth flag.

Important:
This disables authentication for the HANGAR UI, making it publicly accessible.

helm install hangar hangar-0.0.0-10-25.tgz --set global.hangarHost="<URL of the Hangar installation>" --set disableAuth=true

The HANGAR UI can then be reached at <URL of the Hangar installation>.

Configuration

IAM User Management – Keycloak

Users can be managed via the IAM Provider. For the bundled Keycloak you can open:

https://<Hangar_Install_URL>/_keycloak/admin/master/console/#/hangar/users

By default, the Helm chart deploys the following users:

hangar-super-admin@example.com : change-super-admin-password
hangar-cluster-admin@example.com : change-cluster-admin-password
hangar-cluster-user@example.com : change-cluster-user-password

Important:
After installation, change the passwords for these users in the Keycloak UI!

HANGAR – User Permissions

HANGAR Super Admin

The HANGAR Super Admins are configured via the Helm chart; under the hangarSuperAdmins key, an array of super admin users is listed:

hangarSuperAdmins:
- hangar-super-admin@example.com

Super Admins have the permission to create new clusters and configure the infrastructure cluster.

HANGAR Cluster Admin

HANGAR Cluster Admins are configured through the cluster settings in the Hangar UI.
They can manage their respective clusters and add compute resources.

HANGAR Cluster User

HANGAR Cluster Users are also configured through the cluster settings in the Hangar UI.
Users can view their assigned clusters and download a Kubeconfig for each cluster.
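
Such a kubeconfig can be used like any other; the file name below is only a placeholder:

export KUBECONFIG=~/Downloads/hangar-managed-cluster.kubeconfig
kubectl get nodes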

HANGAR Cluster Users When Creating a Managed Cluster

HANGAR – Infrastructure Cluster

After installation, HANGAR starts an infrastructure cluster, which manages and provides virtual compute resources that can easily be added to the managed clusters.

Kubernetes worker nodes are connected to the infrastructure cluster, and these nodes supply the virtual compute resources for the managed clusters.

Infrastructure Cluster Overview

As a HANGAR Super Admin, you can view the overview page of the infrastructure cluster and see which managed clusters the virtual nodes are connected to.

Infrastructure Cluster Overview

Connecting Worker Nodes

To provide virtual compute resources, Kubernetes worker nodes must be connected to the infrastructure cluster.
For this, the Ansible playbook included in the tarball should be used.

First, information about the infrastructure cluster must be retrieved from the HANGAR installation.

kubectl describe namespace --selector=hangar.atix.de/is-infrastructure=true 

Name:         default-infrastructure-cluster-btqc6
Labels:       app.kubernetes.io/managed-by=hangar
              app.kubernetes.io/part-of=k8s-control-plane
              hangar.atix.de/is-infrastructure=true
              hangar.atix.de/k8s-version=1.32.9
              kubernetes.io/metadata.name=default-infrastructure-cluster-btqc6
Annotations:  hangar.atix.de/control-plane-node-port: 31676
              hangar.atix.de/konnectivity-proxy-node-port: 32140
Status:       Active

No resource quota.

No LimitRange resource.

These details must be added to the Ansible inventory.

all:
  vars:
    hangar_subcluster_namespace: "default-infrastructure-cluster-btqc6" # kubernetes.io/metadata.name from the namespace
    kubernetes_worker_pod_cidr: "10.32.0.0/17"
    kubernetes_worker_version: "1.32.9"
    kubernetes_api_server_endpoint: "https://<Hangar installation URL>:31676" # hangar.atix.de/control-plane-node-port from the namespace
    kubernetes_worker_konnectivity_server_ip: "<Hangar installation URL>"
    kubernetes_worker_konnectivity_server_port: "32140" # hangar.atix.de/konnectivity-proxy-node-port from the namespace
  hosts:
    <definition of the Kubernetes worker nodes to be added>

Under the hosts: key, the worker nodes are listed.

all:
  vars:
    <see above>
  hosts:
    node-1:
      ansible_host: <IP of the node>
      ansible_user: root
      kubernetes_worker_node_name: "node-1" # name in the infrastructure cluster
      kubernetes_worker_local_pod_cidr: "10.32.0.0/24" # Pod CIDR for this node
    node-2:
      ansible_host: <IP of the node>
      ansible_user: root
      kubernetes_worker_node_name: "node-2" # name in the infrastructure cluster
      kubernetes_worker_local_pod_cidr: "10.32.1.0/24" # Pod CIDR for this node

Any number of nodes can be listed and connected. Node names can be chosen freely, but sequential numbering is recommended. For the pod CIDR, the third octet must be incremented; each worker node requires its own non-overlapping network (see the example below).
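
Following this pattern, a hypothetical third node would receive the next free /24 network within 10.32.0.0/17:

    node-3:
      ansible_host: <IP of the node>
      ansible_user: root
      kubernetes_worker_node_name: "node-3" # name in the infrastructure cluster
      kubernetes_worker_local_pod_cidr: "10.32.2.0/24" # next non-overlapping /24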

Running the playbook adds the nodes to the infrastructure cluster.
It is important that kubectl is available and that the current kubeconfig points to the cluster in which HANGAR was installed.

export KUBECONFIG=~/.kube/config    
kubectl get pods -n hangar                                               
NAME                                         READY   STATUS      RESTARTS   
hangar-config-service-68b8878-tvtgk          1/1     Running     0          
hangar-dashboard-service-77b9dfc5d6-4jdv4    1/1     Running     0          
hangar-documentation-6d6d559b6b-jk2g8        1/1     Running     0          
hangar-frontend-5f66844984-brpnp             1/1     Running     0          
hangar-keycloak-7466b8895d-lf742             1/1     Running     0          
hangar-keycloak-postgres-6cb5879cb5-2vncc    1/1     Running     0          
hangar-management-service-7ccb94bdbb-jlcs2   1/1     Running     0          
hangar-node-service-76b7c664d8-qwd4t         1/1     Running     0          
hangar-oidc-provider-6fbd48cdc9-8f782        1/1     Running     0          
robots-deployment-6cf9654ddb-sssfn           1/1     Running     0          

The playbook can then be executed from the playbook directory as follows.

ansible-playbook -i inventory.yaml playbook.yaml 
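
If the run fails, SSH connectivity to the worker nodes can be checked against the same inventory with Ansible's built-in ping module; this is a generic Ansible check, not part of the HANGAR playbook:

ansible -i inventory.yaml all -m ping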

Managed HANGAR Cluster

Cluster Management

Cluster management as a HANGAR Super Admin.

Creating a Cluster

To create a managed cluster, simply click the “New Cluster” button.
In the modal, provide a name for the cluster.

Create Managed Cluster
New Cluster Details

After creating the cluster, the Cluster Overview displays all control-plane components and their status.

Cluster Details

Cluster Config

Permissions for the managed clusters are managed through the Manage Clusters menu.
Here, you can also configure the Kubernetes version and the compute nodes for each cluster.

Manage Cluster Page

Cluster User Management

By clicking the edit (pencil) icon, you can add admins and users to the cluster.

Add User to Managed Cluster

Users can see clusters in the HANGAR UI. Admins can additionally manage clusters: add or remove further users and manage compute nodes.

Connecting Cluster Worker Nodes

To run workloads and start pods, worker nodes must be connected to the managed cluster.
To do this, click Scale Nodes on the “Manage Cluster” page and select the desired number of nodes.
HANGAR will join the selected virtual nodes to the cluster, provided that enough compute resources are available in the selected infrastructure cluster.

Scale compute nodes

Alternatively, you can also connect your own worker nodes using the standard Kubernetes tools.
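
A sketch of what that could look like with kubeadm is shown below; the API endpoint, token, and CA certificate hash are placeholders and depend on how the managed cluster exposes its control plane:

kubeadm join <managed-cluster-api-endpoint>:6443 \
  --token <bootstrap-token> \
  --discovery-token-ca-cert-hash sha256:<ca-cert-hash>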
