GitOps and IaC at Scale – ArgoCD and Open Tofu – Part 3 – Hardening and Manage users

Level 400

Earlier posts in this series covered scalable delivery, architecture patterns, and code samples. This part adds production-ready examples and hardening best practices, following the AWS Well-Architected Framework and GitOps with ArgoCD.

Architecture Overview

Now, suppose you must apply custom addons to the environment, onboard application developer teams and platform teams so they can manage their own projects and applications with minimal manual effort, integrate with the AWS ecosystem (for example IAM Identity Center), and apply best practices to the ArgoCD setup. To accomplish this, the following image depicts the high-level reference architecture with the main services:

Final GitOps on AWS

The new resources here are an ALB acting as the ingress gateway for the Argo server, Route 53 providing a hosted zone for the public domain, AWS Certificate Manager for SSL through the ALB, and IAM Identity Center for SSO capabilities.

For production environments, you could instead use a private hosted zone and an internal PKI architecture to enable SSL access.

On the other hand, how can you model the groups and their access to Argo projects and applications?
There is no single correct answer; it depends on your operational model (Conway's Law). As an example, the following image depicts a simple group for each team around a workload.

GitOps Groups

Permission assignment should follow the principle of least privilege, separating application and project actions. For example, the application engineering team can run all actions on its own applications, but can only view and update the platform and addon applications.
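
To make this concrete, here is a minimal sketch of what that could look like in the RBAC policy; the payments, platform, and addons project names, the app-team role, and the ${app_team_idp_group_id} placeholder are illustrative assumptions:

# Hypothetical fragment for configs.rbac in bootstrap/argocd-values.yaml
rbac:
  policy.csv: |
    # Full control over applications in the team's own (hypothetical) payments project
    p, role:app-team, applications, *, payments/*, allow
    # Read and update only on platform and addon applications
    p, role:app-team, applications, get, platform/*, allow
    p, role:app-team, applications, update, platform/*, allow
    p, role:app-team, applications, get, addons/*, allow
    p, role:app-team, applications, update, addons/*, allow
    # Bind the IdP group to the role instead of assigning individual users
    g, ${app_team_idp_group_id}, role:app-team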

GitOps Teams and permissions

Hands On

It’s time to create code.

First, we must modify the hub blueprint to enable ArgoCD best practices (a small values sketch follows the list):
❌ Don’t use the default admin account for management; disable it.
❌ Don’t use the default project.
❌ Don’t use local users unless strictly necessary for the CLI or external automations.
✅ Create roles and assign them to groups instead of making individual assignments.
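
As a minimal sketch, the admin and local-user items could be expressed in the chart values roughly as follows (the keys follow the argocd-cm conventions; the ci-automation account name is an assumption):

# Hypothetical fragment for configs.cm in bootstrap/argocd-values.yaml
cm:
  # Disable the built-in admin account once SSO is working
  admin.enabled: "false"
  # Declare local accounts only when strictly necessary, e.g. CLI automation; API key only, no UI login
  accounts.ci-automation: apiKey

The default-project item is covered later by creating a dedicated platform project (${default_argoproj_name}) and pointing the platform and addon ApplicationSets at it.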

Now, following the ArgoCD documentation, SSO is configured with IAM Identity Center:

A working Single Sign-On configuration using Identity Center (AWS SSO) has been achieved using SAML (with Dex)

To integrate with IAM Identity Center, the SAML application must be created manually, because the CreateApplication API only supports custom OAuth 2.0 applications; third-party SAML or OAuth 2.0 apps must be set up through their respective services or the AWS console. Application creation therefore cannot be automated with Terraform, but group and user assignments can still be handled through IaC.

To establish the ArgoCD hardening baseline, a custom Terraform module was developed. It uses the GitOps Bridge module as its foundation and adds the SSO configuration, the group and default role setup, and the baseline specifications.

The module name is terraform-helm-hardening-gitops-bridge

You can find the module's source code here 👉

thothforge / terraform-helm-hardening-gitops-bridge

GitOps Bridge extended module with hardening practices and examples

Terraform Hardening GitOps Bridge Module

Terraform AWS Kubernetes ArgoCD

A comprehensive Terraform module that provides a hardened GitOps bridge for Amazon EKS clusters, implementing security best practices and enterprise-grade configurations for GitOps workflows using ArgoCD.

🚀 Features

Core Capabilities

  • GitOps Integration: Seamless integration with ArgoCD for GitOps workflows
  • Security Hardening: Enterprise-grade security configurations and best practices
  • Multi-Repository Support: Support for addons, platform, and workloads repositories
  • Flexible Deployment: Single cluster or hub-spoke architecture support
  • Comprehensive Addons: Pre-configured essential Kubernetes addons

Security Features

  • RBAC Integration: Role-based access control with customizable permissions
  • SSO Support

or via the Terraform Registry as thothforge/hardening-gitops-bridge/helm (see the Terragrunt source below).

The first main file is bootstrap/argocd-values.yaml, where the default values are defined and several parameters (such as the default project name, the DNS/Argo host, the kind of ingress, tags, and subnets) are parsed from Terraform inputs:

# bootstrap/argocd-values.yaml
global:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: "topology.kubernetes.io/hostname"
      whenUnsatisfiable: "ScheduleAnyway"
    # Default logging options used by all components
  logging:
    # -- Set the global logging format. Either: `text` or `json`
    format: text
    # -- Set the global logging level. One of: `debug`, `info`, `warn` or `error`
    level: debug



configs:
  params:
    server.insecure: true
  # SSO configuration with IAM Identity Center
  cm:

    url: https://${argo_host}
    dex.config: |
      logger:
        level: debug
        format: json
      connectors:
      - type: saml
        id: aws
        name: "AWS IAM Identity Center"
        config:
          # You need value of Identity Center APP SAML (IAM Identity Center sign-in URL)
          ssoURL: ${sso_assertion_url} #https://portal.sso.yourregion.amazonaws.com/saml/assertion/id
          # You need `caData` _OR_ `ca`, but not both.
          #<CA cert (IAM Identity Center Certificate of Identity Center APP SAML) passed through base64 encoding>
          caData: ${ca_data_iam_app} 
          # Path to mount the secret to the dex container
          entityIssuer: https://${argo_host}/api/dex/callback
          redirectURI: https://${argo_host}/api/dex/callback
          usernameAttr: email
          emailAttr: email
          groupsAttr: groups

  rbac:
    policy.csv: |
      p, role:platform-team, applications, *, */*, allow
      p, role:platform-team, projects, *, */*, allow
      p, role:platform-team, clusters, *, *, allow
      p, role:platform-team, repositories, *, *, allow
      p, role:platform-team, certificates, *, *, allow

      p, role:platform-team, logs, get, */*, allow
      p, role:platform-team, exec, create, */*, allow
      p, role:platform-team, applicationsets, *, */*, allow
      p, role:platform-team, accounts, get, */*, allow
      p, role:platform-team, sessions, create, */*, allow
      p, role:platform-team, sessions, delete, */*, allow
      p, role:platform-team, projects, get, *, allow
      p, role:platform-team, projects, list, *, allow
      p, role:platform-team, projects, update, *, allow
      p, role:platform-team, projects, create, *, allow

      p, role:platform-team, actions, *, */*, allow
      g, ${admin_idp_group_id}, role:platform-team

    policy.default: role:deny
    policy.matchMode: glob
    scopes: '[groups, email]'
server:
  autoscaling:
    minReplicas: 2
    maxReplicas: 20
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 80
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: "${argocd_irsa_role_arn}"
  service:
    type:  "NodePort"
  ingress:
    enabled: "${enable_argo_ingress}"
    hostname: "${argo_host}"
    controller: "aws"
    ingressClassName: "alb"
    tls: true
    aws:
      backendProtocolVersion: GRPC
      serviceType: "NodePort" #"ClusterIP"
    annotations:
      alb.ingress.kubernetes.io/group.name: argocd
      alb.ingress.kubernetes.io/backend-protocol: HTTP
      alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80},{"HTTPS":443}]'
      alb.ingress.kubernetes.io/scheme: "${aws_load_balancer_type}"
      alb.ingress.kubernetes.io/security-groups: "${argo_ingress_sg}"
      alb.ingress.kubernetes.io/certificate-arn: "${acm_certificate_arn}"
      alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
      alb.ingress.kubernetes.io/ssl-redirect: "443"
      alb.ingress.kubernetes.io/healthcheck-path: /healthz
      alb.ingress.kubernetes.io/subnets: "${ingress_subnets}"
      alb.ingress.kubernetes.io/tags: "${required_tags}"
    path: /

    # -- Ingress path type. One of `Exact`, `Prefix` or `ImplementationSpecific`
    pathType: Prefix
    #extraRules:
    #- http:
    #    paths:
    #    - path: /argocd
    #      pathType: Prefix
    #      backend:
    #        service:
    #          name: 'argo-cd-argocd-server-grpc'
    #          port:
    #            name: 'https'
  ingressGrpc:
    # -- Enable an ingress resource for the Argo CD server for dedicated [gRPC-ingress]
    enabled: false #"${enable_argo_ingress}"
    hostname: "${argo_host}"
    ingressClassName: "alb"
    annotations:
      alb.ingress.kubernetes.io/group.name: argocd
      alb.ingress.kubernetes.io/backend-protocol: HTTPS
      alb.ingress.kubernetes.io/backend-protocol-version: GRPC
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80},{"HTTPS":443}]'
      alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
      alb.ingress.kubernetes.io/target-type: "ip"
      alb.ingress.kubernetes.io/ssl-redirect: "443"
      alb.ingress.kubernetes.io/healthcheck-path: /healthz
      alb.ingress.kubernetes.io/success-codes: "0-99"
      alb.ingress.kubernetes.io/security-groups: "${argo_ingress_sg}"
      alb.ingress.kubernetes.io/certificate-arn: "${acm_certificate_arn}"
      alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
      alb.ingress.kubernetes.io/subnets: "${ingress_subnets}"
      alb.ingress.kubernetes.io/tags: "${required_tags}"
controller:
  replicas: 3
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: "${argocd_irsa_role_arn}"
  metrics:
    enabled: true
  priorityClassName: "system-node-critical"
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8082"
    prometheus.io/path: "/metrics"
  service:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "8082"
      prometheus.io/path: "/metrics"


repoServer:
  replicas: 2
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: "${argocd_irsa_role_arn}"
  metrics:
    enabled: true
  priorityClassName: "system-node-critical"
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8084"
    prometheus.io/path: "/metrics"
  service:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "8084"
      prometheus.io/path: "/metrics"


notifications:
  enabled: true

redis-ha:
  enabled: true
  waitForVotes: false
  quorum: 2

applicationSet:
  metrics:
    enabled: true
  replicas: 2
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8085"
    prometheus.io/path: "/metrics"
  service:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "8085"
      prometheus.io/path: "/metrics"

On the other hand, the default Argo project template is bootstrap/default_argproj.yaml:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: ${default_argoproj_name}
  namespace: argocd
  labels:
    argocd.argoproj.io/instance: argoprojects-platform-team
spec:
  description: Project for the IT team to manage Argo projects and applications
  sourceRepos: ${jsonencode(repositories)}
  destinations:
    - namespace: '*'
      server: '*'
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  namespaceResourceWhitelist:
    - group: '*'
      kind: '*'

You can modify this according to your baseline requirements; it is the platform project that acts like the default one.
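
For workload teams, you would typically create more restrictive projects than the platform one above. A minimal sketch, where the project name, repository, destination, and resource whitelist are assumptions:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: payments
  namespace: argocd
spec:
  description: Payments workload team project
  sourceRepos:
    - https://github.com/example-org/payments-gitops.git
  destinations:
    - namespace: 'payments-*'
      server: https://kubernetes.default.svc
  # No clusterResourceWhitelist, so the team cannot create cluster-scoped resources
  namespaceResourceWhitelist:
    - group: 'apps'
      kind: 'Deployment'
    - group: ''
      kind: 'Service'
    - group: ''
      kind: 'ConfigMap'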

The platform addons and platform ApplicationSets simply receive the default project parameter in ArgoCD, but otherwise stay as they were at the beginning of this series.

The module also creates other kinds of resources, such as security groups and, when SSO is not enabled for the platform, default secrets for the admin, developer, and view-only users.

In addition, the module can create a certificate and a CNAME record in a public hosted zone. These resources are usually managed by another team, but in this setup the same account holds a delegated hosted zone for internal use. This makes it possible to use the external-dns addon for easy integration and to register additional services in your control plane.

Sensitive data such as the CA data and the group ID could be loaded from Parameter Store entries provided by the security team, or stored in Secrets Manager and mounted as a secret onto the Dex container in the argocd-dex-server Deployment.
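
For example, dex.config can reference a Kubernetes secret through ArgoCD's $<secret-name>:<key> syntax instead of templating the raw value. A minimal sketch, assuming a hypothetical argocd-iam-sso secret (synced from Secrets Manager, created in the argocd namespace, and labeled app.kubernetes.io/part-of: argocd) that holds the base64-encoded CA certificate:

# Hypothetical variation of configs.cm.dex.config in bootstrap/argocd-values.yaml
cm:
  dex.config: |
    connectors:
    - type: saml
      id: aws
      name: "AWS IAM Identity Center"
      config:
        ssoURL: ${sso_assertion_url}
        # CA certificate read from the argocd-iam-sso secret instead of being inlined
        caData: $argocd-iam-sso:caData
        entityIssuer: https://${argo_host}/api/dex/callback
        redirectURI: https://${argo_host}/api/dex/callback
        usernameAttr: email
        emailAttr: email
        groupsAttr: groups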

The module can be utilized with Terragrunt as follows:

#eks_control_plane-terragrunt.hcl
include "root" {
  path   = find_in_parent_folders("root.hcl")
  expose = true
}

#include "kubectl_provider" {
#  path = find_in_parent_folders("/common/additional_providers/provider_kubectl.hcl")
#}

include "k8s_helm_provider" {
  path = find_in_parent_folders("/common/additional_providers/provider_k8s_helm.hcl")
}

dependency "eks" {
  config_path = "${get_parent_terragrunt_dir("root")}/infrastructure/containers/eks_control_plane"
  mock_outputs = {
    cluster_name                       = "dummy-cluster-name"
    cluster_endpoint                   = "dummy_cluster_endpoint"
    cluster_certificate_authority_data = "dummy_cluster_certificate_authority_data"
    cluster_version                    = "1.31"
    cluster_platform_version           = "1.31"
    oidc_provider_arn                  = "dummy_arn"
    node_security_group_id = "sg-xasr18923d"
    cluster_primary_security_group_id = "sg-xasr18923d"
  }
  mock_outputs_merge_strategy_with_state = "shallow"
}
dependency "eks_role" {
  config_path = "${get_parent_terragrunt_dir("root")}/infrastructure/iam/eks_role"
  mock_outputs = {
    iam_role_arn = "arn::..."
  }
  mock_outputs_merge_strategy_with_state = "shallow"

}
dependency "vpc" {
  config_path = "${get_parent_terragrunt_dir("root")}/infrastructure/network/vpc"
  mock_outputs = {
    vpc_id = "vpc-04e3e1e302f8c8f06"
    public_subnets = [
      "subnet-0e4c5aedfc2101502",
      "subnet-0d5061f70b69eda14",
    ]
    private_subnets = [
      "subnet-0e4c5aedfc2101502",
      "subnet-0d5061f70b69eda14",
      "subnet-0d5061f70b69eda15",
    ]
  }
  mock_outputs_merge_strategy_with_state = "shallow"
}


locals {
  # Define parameters for each workspace
  env = {
    default = {

      environment = "control-plane"
      default_project = include.root.locals.environment.locals.default_project
      enable_argo_ingress = true
      sso_assertion_url = "https://portal.sso.<SSO_REGION>.amazonaws.com/saml/assertion/<id>"
      ca_data= "BASE 64 CA Data"
      argo_host_dns = {
        domain_name            = "argocd.devsecops.labvel.io"
        zone_id                = "Z101221820UZWIPQ27722"
        aws_load_balancer_type = "internet-facing"
        validation = "public"
      }
      oss_addons = {
        enable_argo_workflows = true
        #enable_foo            = true
        # you can add any addon here, make sure to update the gitops repo with the corresponding application set
      }

      addons_metadata = merge(
        include.root.locals.environment.locals.addons_repos_properties
      )


      tags = {
        Environment = "control-plane"
        Layer       = "Containers"
      }
    }

    "dev" = {

      create = true
    }
    "prod" = {

      create = true
    }
  }
  # Merge parameters
  environment_vars = contains(keys(local.env), include.root.locals.environment.locals.workspace) ? include.root.locals.environment.locals.workspace : "default"
  workspace        = merge(local.env["default"], local.env[local.environment_vars])
}


terraform {
  source ="tfr:///thothforge/hardening-gitops-bridge/helm?version=1.0.1"

}

inputs = {
  cluster_name                       = dependency.eks.outputs.cluster_name
  cluster_endpoint                   = dependency.eks.outputs.cluster_endpoint
  cluster_platform_version           = dependency.eks.outputs.cluster_platform_version
  oidc_provider_arn                  = dependency.eks.outputs.oidc_provider_arn
  node_security_group = dependency.eks.outputs.cluster_primary_security_group_id
  vpc_id         = dependency.vpc.outputs.vpc_id
  private_subnet_ids = dependency.vpc.outputs.private_subnets
  public_subnet_ids = dependency.vpc.outputs.public_subnets

  cluster_certificate_authority_data = dependency.eks.outputs.cluster_certificate_authority_data
  enable_argo_ingress = local.workspace["enable_argo_ingress"]
  argo_host_dns = local.workspace["argo_host_dns"]
  #external_dns_domain_filters = [local.workspace["argo_host_dns"]]
  sso_assertion_url = local.workspace["sso_assertion_url"]
  ca_data= local.workspace["ca_data"]

  admin_idp_group_id = "318xx789-x071-70f5-63x6-xx21233xxx33"

  tags = local.workspace["tags"]

}

Finally, you can log in to the ArgoCD hub control plane using the URL or through the AWS SSO login interface.
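
If you prefer the CLI, the same SSO flow can be triggered with argocd login argocd.<your-domain> --sso --grpc-web (the --grpc-web flag helps when the API server sits behind an ALB).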

Argo CD simple Access
Or
SSO Argo CD App

The upcoming post will include additional customization options and add-ons.
Thanks for reading and sharing.🙌 👽 😃
