This content originally appeared on Bits and Pieces - Medium and was authored by Ashan Fernando
How to deploy Microservice Components into Kubernetes

Over the years, Microservices have proven to be an effective way to mitigate the challenges of large-scale monolithic architectures. Giving complete autonomy to Microservice teams makes them productive and lets them deliver functionality rapidly. However, that same autonomy limits collaboration across teams. As a result, code reuse across Microservices is minimal and, in most cases, non-existent.
This is where Composable Microservices come into the picture. They make code reuse a fundamental practice across the entire organization while still offering the flexibility and autonomy of Microservices.
By enabling components to be reused and composed in different services, Composable Microservices ensure that teams can maintain their independence while still benefiting from shared resources and consistent implementations. This approach enhances collaboration and efficiency and promotes innovation and agility, allowing organizations to quickly adapt to changing requirements and deliver high-quality solutions.

Creating Composable Microservices
If you are setting up a Composable Microservices project from scratch, you can use Bit's platform starter. It provides the essentials for bootstrapping the project with boilerplate components and the configuration to deploy your Microservices into a Kubernetes cluster.
Note: If you plan to make an existing set of Microservices composable, you can still use this starter template. As the first candidates to componentize, identify cross-cutting code, such as authentication and validation. Then, map it into Microservice components created inside the platform.
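For instance, a shared validation helper is a typical first candidate. Below is a minimal sketch of such a component; the file and function names are illustrative and not part of the starter:
// @file validate-email.ts (hypothetical shared component)
// Cross-cutting utility that any Microservice component can import,
// so every service validates email addresses the same way.
const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

export function validateEmail(value: string): boolean {
  return EMAIL_PATTERN.test(value.trim());
}
Once extracted, the backend services can depend on this single component instead of maintaining their own copies.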
Let’s examine how to create your first fleet of Composable Microservices deployable into Kubernetes.
Step 1: Use the Platform Starter Template
After installing Bit CLI, run the following command on your local machine.
bit new platform-starter my-new-workspace --aspect teambit.community/starters/platform-starter --default-scope myorg.myscope
This platform starter template creates a new workspace and a set of example components for you to work with.

Step 2: Update the Docker Image Configuration
If you look at the frontend and backend service components, you will find a file named <component-name>.bit-app.ts. This file contains the configuration to publish each service as a Docker image.
// @file acme-web.bit-app.ts
import { ReactSsr } from "@bitdev/react.app-types.react-ssr";
import { DockerDeploy, NodeDockerFile } from "@backend/docker.docker-deployer";

export default ReactSsr.from({
  name: "acme-web",
  ssr: true,
  serverRoot: "server.app-root.js",
  clientRoot: "acme-web.app-root.js",
  deploy: DockerDeploy.deploy({
    org: "bitdevcommunity",
    buildOptions: {
      platform: "linux/amd64",
    },
    pushOptions: {
      authconfig: {
        username: "bitdevcommunity",
        password: process.env.DOCKER_PASSWORD || "",
        serveraddress: "https://index.docker.io/v1",
      },
    },
    dockerfileTemplate: new NodeDockerFile(),
    entryFile: "server.cjs",
  }),
});
You can modify the org and the service name, which map to the Docker organization and image name in the Docker registry (e.g., Docker Hub).
Each of these services uses a component named docker-deployer, which handles publishing the container images. The following article provides more details about it.
How to Dockerize Your Composable Architecture
Once you modify any frontend or backend service, Bit will build its Docker image and publish it to the Docker registry as defined in the configuration.
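For example, to publish the images under your own Docker Hub organization, you could adjust the deploy section along these lines. The option names follow the example above; the myorg value is a placeholder for your own organization:
// @file acme-web.bit-app.ts (excerpt with illustrative values)
deploy: DockerDeploy.deploy({
  // images are published as myorg/acme-web in the registry
  org: "myorg",
  pushOptions: {
    authconfig: {
      username: "myorg",
      // keep credentials out of source control; read them from the environment
      password: process.env.DOCKER_PASSWORD || "",
      serveraddress: "https://index.docker.io/v1",
    },
  },
  // ...
}),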
Gateway Server
The gateway-server also has its Docker configuration, in the gateway-server.bit-app.ts file. This service works as the micro-gateway that acts as the entry point for the backend services.
// @file gateway-server.bit-app.ts
// ...
import { Platform } from '@bitdev/platforms.platform';

const UserServer = import.meta.resolve(
  '@k8test/platform.backend-services.user-server'
);
const DiscussionServer = import.meta.resolve(
  '@k8test/platform.backend-services.discussion-server'
);
const PlatformGateway = import.meta.resolve(
  '@k8test/platform.core.gateway-server'
);

export const AcmePlatform = Platform.from({
  // ...
  backends: {
    main: PlatformGateway,
    services: [
      [
        UserServer,
        {
          name: 'user-server',
          remoteUrl: 'http://user-server-service',
        },
      ],
      [
        DiscussionServer,
        {
          name: 'discussion-server',
          remoteUrl: 'http://discussion-server-service',
        },
      ],
    ],
  },
  // ...
});

export default AcmePlatform;
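The remoteUrl values such as http://user-server-service are the DNS names of the corresponding Kubernetes Services, resolvable only inside the cluster. Since the same name/URL pairs appear again in the platform configuration below, one option, shown here as a sketch rather than as part of the starter, is to keep them in a small shared component that both files import:
// @file service-endpoints.ts (hypothetical shared component)
// A single source of truth for backend service names and their in-cluster URLs.
export interface ServiceEndpoint {
  name: string;
  remoteUrl: string;
}

export const userServerEndpoint: ServiceEndpoint = {
  name: 'user-server',
  remoteUrl: 'http://user-server-service',
};

export const discussionServerEndpoint: ServiceEndpoint = {
  name: 'discussion-server',
  remoteUrl: 'http://discussion-server-service',
};
The gateway and platform configurations could then reference these constants instead of repeating the literals.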
Step 3: Run the Applications Locally for Development
One of the main advantages of a Composable Microservices platform is that you can run all the services locally for development. It plugs the Microservices and Microfrontends together for a better developer experience.
To run the platform locally, you can execute the following command.
bit run platform
If we look at the platform.bit-app.ts file inside the platform component, we can find the configuration that orchestrates all these services together.
// @file platform.bit-app.ts
import { Platform } from '@bitdev/platforms.platform';
import { KubernetesDeployer } from '@backend/kubernetes.kubernetes-deployer';
import { GKEAdapter } from '@backend/kubernetes.adapters.gke';

const UserServer = import.meta.resolve(
  '@k8test/platform.backend-services.user-server'
);
const DiscussionServer = import.meta.resolve(
  '@k8test/platform.backend-services.discussion-server'
);
const AcmeWeb = import.meta.resolve(
  '@k8test/platform.frontend-services.acme-web'
);
const PlatformGateway = import.meta.resolve(
  '@k8test/platform.core.gateway-server'
);

export const AcmePlatform = Platform.from({
  name: 'platform',
  frontends: {
    main: AcmeWeb,
  },
  backends: {
    main: [
      PlatformGateway,
      {
        remoteUrl: 'http://gateway-server-service',
      },
    ],
    services: [
      [
        UserServer,
        // The remote URL here is the name of the service in the Kubernetes cluster.
        // To use the remote service from your local dev environment, you can change the remote URL to a valid one.
        { name: 'user-server', remoteUrl: 'http://user-server-service' },
      ],
      [
        DiscussionServer,
        {
          name: 'discussion-server',
          remoteUrl: 'http://discussion-server-service',
        },
      ],
    ],
  },
  // ...
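As the inline comment notes, these remote URLs are in-cluster service names. If you want a local platform run to call an already deployed service instead, you can point remoteUrl at a reachable address. A minimal sketch, assuming a hypothetical USER_SERVER_URL environment variable that is not part of the starter:
// @file platform.bit-app.ts (excerpt with an illustrative override)
services: [
  [
    UserServer,
    {
      name: 'user-server',
      // falls back to the in-cluster service name when no override is set
      remoteUrl: process.env.USER_SERVER_URL || 'http://user-server-service',
    },
  ],
  // ...
],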
Step 4: Kubernetes Cluster Configuration
At the end of the same platform.bit-app.ts file, you can find the Kubernetes cluster configuration. In this example, it uses the GKEAdapter to connect to a cluster in Google Cloud.
// @file platform.bit-app.ts
import { GKEAdapter } from '@backend/kubernetes.adapters.gke';

// ...
  deploy: KubernetesDeployer.deploy({
    adapter: new GKEAdapter({
      clusterName: 'acme-platform',
      zone: 'us-central1',
      keyJson: process.env.K8S_GOOGLE_CLOUD,
    }),
    organization: 'bitdevcommunity',
  }),
});

export default AcmePlatform;
There are several other adapters to choose from.

If you plan to use one of these adapters, such as Amazon EKS, you can install it into your workspace and configure it accordingly.
bit install @backend/kubernetes.adapters.eks
You must modify the organization name to reflect your Docker registry organization. You can override the default configuration using YAML files or generator components.
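For example, after installing the EKS adapter, the deploy section could be adjusted along the following lines. This is only a sketch: the import name and constructor options are assumptions modelled on the GKE example above, so check the adapter component's documentation for the exact shape.
// @file platform.bit-app.ts (excerpt with assumed EKS options)
import { EKSAdapter } from '@backend/kubernetes.adapters.eks';

// ...
  deploy: KubernetesDeployer.deploy({
    adapter: new EKSAdapter({
      // assumed options, analogous to the GKE adapter above
      clusterName: 'acme-platform',
      region: 'us-east-1',
    }),
    // must match your Docker registry organization
    organization: 'myorg',
  }),
});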
Step 5: Publish New Image Versions and Deploy them into Kubernetes
Suppose you modify the frontend, any of the backend services, or a dependent component. You can tag and export them to bit.cloud using the following commands.
bit tag -m "modified services"
bit export

Once you export, Ripple CI (Bit’s component-oriented CI/CD) propagates through the dependency graph of modified components, builds the relevant Docker images, publishes them to the Docker registry, and finally deploys the latest image versions into the Kubernetes cluster.

Conclusion
Composable Microservices and Microfrontends go beyond traditional approaches to provide better collaboration between teams.
Local execution is supported through the platform micro-framework, which also builds the Docker images and deploys them into the Kubernetes cluster, handling DevOps end to end.
Most importantly, you can share components between different services across teams, reducing duplicated efforts while maintaining autonomy.
As you can see, the Docker and Kubernetes configurations are mostly managed by convention. If you need customization, create the YAML and Docker files and place them in the respective app root directories.
Thanks for Reading! Cheers!
Learn More
- How to Dockerize Your Composable Architecture
- Composable Architecture CI/CD: Releasing to Production on Every Change
- Composable Software Architectures are Trending: Here’s Why
- Composable Applications: A Practical Guide