How Blocking Port 80 Made My Time API Project More Secure (And More Annoying)

At some point, after the deadline for the assessment had passed, it turned into a personal project for me. Trying to accomplish each of the tasks taught me how to implement ideas and practices I had only known in theory.

For this article, we'll talk about using a domain for my Time API project. My implementation can be found in the namecom_domain branch of the project's repository. It was actually the main branch before the domain I was using expired. I wanted my API to be accessible from a simple URL.

Before my access to the domain expired, I had a working implementation. To understand how I got there, you first need to know a couple of the decisions I made for the project.

Ingress Routing and Network Security Decisions Shaping This Version of the Project

The first decision was to deploy an ingress controller to help with exposing the API, using a Terraform module that deploys an NGINX Ingress Controller Helm chart.

module "nginx-controller" {
 source  = "terraform-iaac/nginx-controller/helm"
 version = ">=2.3.0"

 create_namespace = true
 namespace        = "nginx-ingress"

 depends_on = [azurerm_kubernetes_cluster.time_api_cluster]
}

What is an Ingress Controller? It acts as a manager for all external traffic to microservices intended to be exposed outside the cluster. Its core function is to implement the routing rules defined in Ingress resources.

It handles advanced networking tasks, such as distributing user requests across instances of an application (Layer 7 load balancing), securing and decrypting data connections between users and servers (SSL/TLS termination), and directing traffic to different services using separate unique URL paths on the same website domain (host/path-based routing).

For my specific setup, I created an Ingress resource that defined the following routing rule: when external traffic arrives at the domain api.mywonder.works and the path /time, it should be routed to the API.

resource "kubernetes_ingress_v1" "time_api" {
 metadata {
   name      = "time-api-ingress"
   namespace = "time-api"
   annotations = {
     "cert-manager.io/cluster-issuer" = "certmanager"
   }
 }

 spec {
   ingress_class_name = "nginx"

   tls {
     hosts       = ["api.mywonder.works"]
     secret_name = "time-api-tls"
   }

   rule {
     host = "api.mywonder.works"
     http {
       path {
         path      = "/time"
         path_type = "Prefix"
         backend {
           service {
             name = kubernetes_service_v1.time_api.metadata[0].name
             port {
               number = kubernetes_service_v1.time_api.spec[0].port[0].port
             }
           }
         }
       }
     }
   }

   # Added a Default rule (no host) because my domain expired and I need to use the public IP for now

   rule {
     http {

       path {
         path      = "/time"
         path_type = "Prefix"
         backend {
           service {
             name = kubernetes_service_v1.time_api.metadata[0].name
             port {
               number = kubernetes_service_v1.time_api.spec[0].port[0].port
             }

           }
         }
       }

     }

   }

 }

 depends_on = [kubernetes_service_v1.time_api, time_sleep.wait_for_nginx]
}

Without a domain, the default access method would have been directly through the IP address assigned by the Cloud Service Provider as an endpoint for ingress.
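If you ever need that IP, one way to find it is to inspect the ingress controller's LoadBalancer service (a quick sketch, assuming the nginx-ingress namespace from the module above; the placeholder IP is illustrative):

# The EXTERNAL-IP column of the LoadBalancer service is the ingress endpoint
kubectl get svc -n nginx-ingress

# With Port 80 blocked by the NSG (see below), the IP is only reachable over
# HTTPS; the certificate is issued for the domain, so hitting the raw IP
# needs -k to skip hostname verification
curl -k https://<EXTERNAL-IP>/time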

My second decision was to restrict access to the cluster through Port 80 at the Virtual Private Cloud Network level by denying all inbound traffic except on Port 443 and from within the Virtual Network itself, using Network Security Group (NSG) security rules. This prevented unencrypted traffic to the API and minimized the overall attack surface of my setup.

resource "azurerm_network_security_group" "time_api_nsg" {
 name                = "nsg-${azurerm_resource_group.time_api_rg.name}"
 resource_group_name = azurerm_resource_group.time_api_rg.name
 location            = azurerm_resource_group.time_api_rg.location

 security_rule {
   name                       = "allow-https-access"
   priority                   = 100
   direction                  = "Inbound"
   access                     = "Allow"
   protocol                   = "Tcp"
   source_port_range          = "*"
   destination_port_range     = "443"
   source_address_prefix      = "*"
   destination_address_prefix = "*"
 }

 security_rule {
   name                       = "allow-vnet-inbound"
   priority                   = 102
   direction                  = "Inbound"
   access                     = "Allow"
   protocol                   = "*"
   source_port_range          = "*"
   destination_port_range     = "*"
   source_address_prefix      = "VirtualNetwork"
   destination_address_prefix = "*"

 }

 security_rule {
   name                       = "deny-all-inbound"
   priority                   = 4096
   direction                  = "Inbound"
   access                     = "Deny"
   protocol                   = "*"
   source_port_range          = "*"
   destination_port_range     = "*"
   source_address_prefix      = "*"
   destination_address_prefix = "*"
 }
}

This decision alone made the project a little more complex—in other words, I made it much harder for myself.
How hard? Well, now I couldn't use the http-01 validation method to get an SSL/TLS Certificate from Let's Encrypt.
For clarity, let me explain what an SSL/TLS certificate and a certificate authority are.

An SSL/TLS Certificate authenticates that a user's browser is truly connecting to your API and not a malicious third party, and it enables the use of the HTTPS protocol — which encrypts all data transmitted between the user's browser and my API's host.

It also reassures users that the connection is secure, since major browsers display a padlock icon in the address bar for websites that have a certificate from a trusted Certificate Authority like Let's Encrypt.

A Certificate Authority is a trusted third-party organisation that issues SSL/TLS Certificates. The different ways of proving you control a domain before a certificate is issued are called validation methods.
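If you're curious what a certificate actually carries, you can inspect any HTTPS endpoint's certificate from the command line; a quick check (using my domain as the example host) looks like this:

# Fetch the certificate the server presents and print who issued it and
# how long it is valid; a Let's Encrypt cert shows Let's Encrypt as issuer
openssl s_client -connect api.mywonder.works:443 -servername api.mywonder.works </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates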

Validation Challenge

So why couldn't I use the http-01 validation method? The http-01 challenge requires access to port 80 on the host server. Since I had restricted that, I had to use a different validation method. I opted for dns-01, which required a few things:

  • A DNS Provider: The DNS-01 method works by automatically adding a specific TXT record to a DNS Provider's records. To do this programmatically, I needed a DNS provider that Cert-Manager could interact with. I used Name.com since they also had an API that I could use to interact with their services.
  • Permissions to Modify DNS Records: This is where the Name.com API came in. Cert-Manager needed the authority to create and delete the temporary DNS records required for the challenge. I had to securely store and provide these credentials for accessing the Name.com API to the cluster for the validation to work.
  • The Cert-Manager Webhook: The dns-01 challenge is not natively supported for all DNS providers. To enable this functionality for my Name.com domain, I had to deploy a custom webhook—an external component that extends Cert-Manager to support my specific DNS provider.

Thankfully, I didn't have to create the webhook myself. The credit for that goes to Ian Grant, who published a Helm chart for it in a GitHub repository. There is always a risk of it not being maintained, and I intend to learn how to create my own, but it was a huge help.

Implementation with Terraform

Now that you understand my decisions a little, I can better explain my implementation.

Since I am relying on Infrastructure as Code (IaC) with Terraform, I was able to define and automate the entire setup in a reproducible way. In the namecom_domain branch, the core infrastructure provisioning is separated from the application deployment — there's a dedicated microservices/ folder holding the deployment-specific configs like deploy.tf.

This let me provision the AKS cluster, networking, and base add-ons first, then layer on the API deployment and cert management in a second Terraform apply step. I structured it this way to ensure the infrastructure was provisioned correctly before the application deployed—minimizing errors in the first workflow run.
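As a rough sketch, the staged flow amounts to two applies (the exact steps live in the GitHub Actions workflow described later, and the paths here are assumed from the repo layout above):

# Stage 1: provision the AKS cluster, networking, NSG rules, and base add-ons
terraform apply -auto-approve

# Stage 2: bring the application configs into the root module, then apply
# again to deploy the API, Service, and Ingress
cp microservices/deploy.tf .
terraform apply -auto-approve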

Helm, as a package manager for Kubernetes, made deploying the Cert-Manager Controller straightforward. The Cert-Manager Controller manages the issuance and renewal of SSL/TLS certificates in the cluster. I used Helm to install both the Cert-Manager Controller and a ClusterIssuer resource, as defined in the provision.tf file.

resource "helm_release" "cert_manager" {
 name       = "cert-manager"
 repository = "https://charts.jetstack.io"
 chart      = "cert-manager"
 version    = "v1.5.4"
 create_namespace = true
 namespace        = "cert-manager"

 set {
   name  = "installCRDs"
   value = "true"
 }

 timeout = 600

 depends_on = [module.nginx-controller]

}

resource "helm_release" "cert_manager_issuers" {
 chart      = "cert-manager-issuers"
 name       = "cert-manager-issuers"
 version    = "0.3.0"
 repository = "https://charts.adfinis.com"
 namespace  = "cert-manager"

 # https://acme-staging-v02.api.letsencrypt.org/directory
 values = [
   <<-EOT

 clusterIssuers:
  - name: certmanager
    spec:
      acme:
        email: "greatvictor.anjorin@gmail.com"
        server: "https://acme-v02.api.letsencrypt.org/directory"
        privateKeySecretRef:
          name: certmanager
        solvers:
          - dns01:
              webhook:
                groupName: acme.name.com
                solverName: namedotcom
                config:
                  username: "${var.namecom_username}"
                  apitokensecret:
                    name: namedotcom-credentials
                    key: api-token               

EOT

 ]

 depends_on = [helm_release.cert_manager, kubernetes_secret_v1.namecom_api_token]
}

I configured the ClusterIssuer to use Let's Encrypt's ACME server. The issuer is cluster-wide—meaning it can issue certificates for any namespace. Key configs include setting the ACME server URL to the production endpoint (https://acme-v02.api.letsencrypt.org/directory) and defining the dns-01 solver.

For the solver, I pointed it to the custom Name.com webhook, which handles the actual Name.com API calls to add and remove TXT records during validation. I installed both the controller and the issuer via Helm release resources in the provision.tf file.

As for the webhook, I deployed it as another Helm release in the same provision.tf file. In the GitHub Actions workflow, I used the actions/checkout@v4 action to clone Ian Grant's Name.com webhook GitHub repository into a local webhook/ directory. The Helm release then references the chart using a relative path (../webhook/deploy) to the locally cloned repository during the workflow run.

- name: Checkout Name.com webhook GitHub repository
  uses: actions/checkout@v4
  with:
    repository: imgrant/cert-manager-webhook-namecom
    path: webhook

resource "helm_release" "namecom_webhook" {
  name       = "namecom-webhook"
  repository = "../webhook/deploy"
  chart      = "cert-manager-webhook-namecom"
  namespace  = "cert-manager"

  depends_on = [helm_release.cert_manager]
}

The webhook runs in the cert-manager namespace and needs access to Name.com API credentials. I provided these securely by creating a Kubernetes Secret for the sensitive API token, populated with a Terraform variable that gets its value from DOMAIN_API_TOKEN in GitHub Secrets during the Actions workflow.

- name: Apply Terraform configuration
  env:
    ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
    ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
    ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
    ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
    USER_OBJECT_ID: ${{ secrets.MY_USER_OBJECT_ID }}
    USERNAME: ${{ secrets.DOMAIN_API_USERNAME }}
    TOKEN: ${{ secrets.DOMAIN_API_TOKEN }}
  run: |
    cat <<EOF > terraform.tfvars.json
    {
      "my_user_object_id": "$USER_OBJECT_ID",
      "namecom_username": "$USERNAME",
      "namecom_token": "$TOKEN"
    }
    EOF
    terraform apply --auto-approve
  working-directory: ./terraform

resource "kubernetes_secret_v1" "namecom_api_token" {

 metadata {
   name      = "namedotcom-credentials"
   namespace = "cert-manager"
 }

 data = {
   api-token = var.namecom_token
 }

 type = "Opaque"

depends_on = [helm_release.namecom_webhook]

}

For the username, I took a slightly different approach. It is still stored in GitHub Secrets and passed to Terraform as a variable, but instead of creating a Kubernetes Secret, I injected it directly into the values block of the ClusterIssuer Helm release.

This approach ensures the Cert-Manager Controller can authenticate with Name.com via the webhook, without exposing credentials in code.

Once Cert-Manager and the webhook are in place, the magic happens in the Ingress resource defined in microservices/deploy.tf. This file deploys the actual Time API as a Kubernetes Deployment and Service, then creates an Ingress to route traffic.

The Ingress spec includes a host rule for api.mywonder.works, routing the /time path to the backend service on port 5000. To trigger automatic certificate issuance, the annotation cert-manager.io/cluster-issuer: certmanager ensures that Cert-Manager:

  • watches the time-api Ingress resource for changes,
  • requests certificates for hosts listed in the tls block (in our case, api.mywonder.works) via the certmanager ClusterIssuer using the challenge type specified in the ClusterIssuer resource (dns-01),
  • stores the issued certificate in the specified secret (time-api-tls),
  • and enables TLS termination at the ingress controller level.

Under the hood, when you apply this, Cert-Manager creates a Certificate resource, initiates the ACME challenge with Let's Encrypt, and uses the webhook to temporarily add a TXT record to your Name.com DNS zone (in my case, _acme-challenge.api.mywonder.works). Let's Encrypt verifies the record to confirm domain ownership, issues the certificate, and Cert-Manager stores it in the Secret referenced by the Ingress. Renewals happen automatically before the 90-day expiration, with the same process repeating seamlessly.
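While a challenge is in flight, you can watch for that TXT record yourself; for example:

# Query the temporary validation record the webhook creates; it only exists
# for the duration of the dns-01 challenge
dig +short TXT _acme-challenge.api.mywonder.works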

This implementation is carefully scripted out in the GitHub Actions workflow defined in build.yaml. The workflow first builds and pushes the Docker image of the Flask app to Docker Hub, then uses Terraform to provision the infrastructure. It populates a terraform.tfvars.json file with GitHub Secrets (domain and Name.com credentials) that I created before triggering the workflow, to avoid hardcoding sensitive info.

After the base infrastructure is up, it moves the microservices/deploy.tf into the root and runs another terraform apply to deploy the app and Ingress.

Like I explained earlier, this staged approach ensures dependencies like the cluster and Cert-Manager are ready before certificate issuance kicks in.

If you want to try this with your own Name.com domain, once deployed, you can hit https://api.YOUR_DOMAIN_HERE/time (note the HTTPS) in your browser or via curl, and you'll get the current UTC time securely. If things go wrong—say, DNS propagation delays or webhook issues—you can troubleshoot with kubectl get certificates to check certificate status, or dive into Cert-Manager logs with kubectl logs -n cert-manager.
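Concretely, a debugging session might look like this (a sketch; the deployment name assumes the standard cert-manager Helm chart, and the Certificate name matches the TLS secret from the Ingress above):

# Is the certificate ready? (namespace from the Ingress resource)
kubectl get certificates -n time-api

# Dig into the events for a stuck certificate
kubectl describe certificate time-api-tls -n time-api

# Check the controller logs for ACME or webhook errors
kubectl logs -n cert-manager deploy/cert-manager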

Conclusion

This setup not only made my API accessible via a clean, branded URL but also enforced HTTPS-only access without manual cert management. It was a bit of extra work upfront due to the Port 80 restriction, but the automation pays off for scalability and security.

If you're adapting this, remember to update details like the domain in the time-api Ingress resource and email in the cluster issuer resource, and ensure your Name.com API token has the right permissions.

The branch's README has more on prerequisites and manual deployment if you want to tinker locally before going full CI/CD.

Overall, I was forced to delve a little deeper into some cloud-native practices and how to make them happen, and I'm glad I pushed through the complexities—domain expirations aside!

