Darkbit MKIT – Managed Kubernetes Inspection Tool

I recently came across an open-source tool called MKIT (Managed Kubernetes Inspection Tool) and tried it out against an AWS EKS cluster and a standalone Kubernetes cluster to see the results of its set of inspection tests. I wanted to write up this tool for my followers and share my experience customizing it. I hope this helps anyone with similar requirements for running these tests or contributing to the development.

The official website of Darkbit is:

https://darkbit.io

The main purpose of MKIT is to identify misconfigurations in a cluster and in the workloads running inside it. At the time of writing, MKIT supports the major managed Kubernetes platforms (EKS, AKS, and GKE) as well as standalone Kubernetes clusters. You can find the official blog post announcing the release of the tool here.

This tool is built from a complete set of open-source components, and the main component that performs the inspection of the environment is the open-source Cinc Auditor, a "free-as-in-beer" distribution of the open-source software from Chef Software Inc. CINC stands for "CINC Is Not Chef": the product is built from the same code as the original, with only the branding changed.

MKIT runs as a Docker image, and separate inspection profiles are maintained as separate repositories for each platform. Those inspection profiles are pulled during the image build. Any update or change to the inspection rules needs to be released with a version number, and that version number must then be updated in the Dockerfile to build a new image.
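To illustrate the idea (the repository name and version tag below are placeholders, not the real ones), the build step effectively fetches a pinned release of each profile, something like:

# Illustrative only: fetch a pinned release of an inspection profile
# during the image build. Repo name and version tag are placeholders.
PROFILE_VERSION="0.1.0"
curl -sSL "https://github.com/darkbit-io/inspec-profile-eks/archive/v${PROFILE_VERSION}.tar.gz" \
  | tar -xz -C /profiles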

The MKIT tool and its inspection profiles are hosted on GitHub; the main repository is https://github.com/darkbit-io/mkit.

Tests are performed in a few steps. For an EKS cluster, MKIT performs the AWS-related tests first and then the Kubernetes-related tests, generates the results, and makes them ready to display in the dashboard.

MKIT Tool Usage And How It Works

After performing the inspection tests, the Docker image brings up a web page, exposing the service on port 8000. It shows you a complete list of successful tests, failed tests, and affected resources in a simple but effective dashboard.
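As a rough sketch of how this is run (the image name and mount paths here are illustrative; check the project README for the exact invocation):

# Run the MKIT image, mounting cloud and cluster credentials and
# publishing the dashboard port. Image name and paths are placeholders.
docker run --rm \
  -v ~/.aws:/root/.aws \
  -v ~/.kube:/root/.kube \
  -p 8000:8000 \
  darkbit/mkit

Once the tests finish, the dashboard is available at http://localhost:8000.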

Kubernetes Inspection MKIT interface

Outputs can easily be filtered by failed and successful tests.

Kubernetes Inspection Interface filter

Here is a sample of the EKS cluster tests. I created a custom rule to check that the correct tags are in place in my EKS cluster.

Kubernetes Inspection Test

This is the rule I used in the InSpec profile for the above test:

control 'eks-9' do
  impact 0.1
  title 'Tag compatibility'

  desc "Check the required tags are in place"
  desc "remediation", "Add correct and required tags"
  desc "validation", "verify the cluster tags again!"

  tag platform: "AWS"
  tag category: "Management and Governance"
  tag resource: "EKS"
  tag effort: 0.5

  ref "EKS Upgrades", url: "#"
  ref "EKS Versions", url: "#"

  describe "#{awsregion}/#{clustername}: tags" do
    subject { aws_eks_cluster(cluster_name: clustername, aws_region: awsregion)}
    its(:tags) { should include( "Environment" => "test", "Inspec" => "mkit", "Name" => "EKStest" ) }
  end
end

Here is another test to check the EKS cluster status, verifying whether it is in the "ACTIVE" state.

Kubernetes Inspection test 2

This is the InSpec rule I used for the above test:

control 'eks-7' do
  impact 0.7
  title 'Ensure the EKS Cluster Status is ACTIVE'

  desc "Bitesize EKS cluster Status should be ACTIVE"
  desc "remediation", "Diagnose the reson behind this status"
  desc "validation", "verify the cluster status again!"

  tag platform: "AWS"
  tag category: "Management and Governance"
  tag resource: "EKS"
  tag effort: 0.5

  ref "EKS Upgrades", url: "https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html"
  ref "EKS Versions", url: "https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html"

  describe "#{awsregion}/#{clustername}: status" do
    subject { aws_eks_cluster(cluster_name: clustername, aws_region: awsregion)}
    its('status') { should eq 'ACTIVE' }
  end
end

I also tried some custom tests to check Kubernetes resources. I created a test that talks to the Kubernetes API to check for the existence of a given Pod, and it succeeded. My Pod name was "my-pod". Here is the output of the successful Kubernetes test.

Kubernetes Inspection kube test

Here is the InSpec test for the above. I actually spent some time on this one, as it was throwing errors and it took a while to figure out.

control "k8s-9" do
  impact 0.4

  title "Custom Rule To Check A POD Name with my-pod"

  desc "There should not be a POD with the name of my pod and after creating the pod this test should be passed - As Aruna"
  desc "remediation", "Create a POD with the name of my-pod - As Aruna"
  desc "validation", "Check your POD Names - As Aruna"

  tag platform: "K8S"
  tag category: "Test POD Validation Test"
  tag resource: "Pods"
  tag effort: 0.3

  ref "Follow for more info", url: "https://www.techcrumble.net"

  describe "my-pod: pods" do
    subject { k8sobject(api: 'v1', type: 'pods', namespace: 'default', name: 'my-pod') }
    its('name') { should eq 'my-pod' }
  end
end
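
To make this test pass, I just needed a Pod with that name in the default namespace; for example (the image choice is arbitrary):

# Create the Pod the test expects; any simple image will do.
kubectl run my-pod --image=nginx --namespace=default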

If your tests are failing, which means the resource is not compliant with the rules, you will see a failed test status similar to the one below.

Kubernetes Inspection failed test
Kubernetes Inspection failed pod test

However, I got similar errors in my tests, and the interface showed something like "Control Source Code Error". It was hard for me to figure out what was causing the error and where to start troubleshooting. The best way I found to understand the error in this situation was to log in to the running container and execute the "cinc-auditor" command for the test manually. That gave me a proper error message, and I could start troubleshooting.

Kubernetes Inspection code error
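
For reference, this is roughly what that looks like; the container name and profile path are placeholders from my setup, so adjust them to yours:

# Open a shell in the running MKIT container (container name is a placeholder).
docker exec -it mkit /bin/sh

# Re-run a profile manually to get the full error output;
# the profile path and target flag are illustrative.
cinc-auditor exec /profiles/inspec-profile-eks -t aws://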

One thing I noticed in my case: the AWS region was not being picked up for the tests, which caused many of them to fail, such as the checks for S3 bucket availability, role availability, and so on.

Kubernetes Inspection no region error

I noted this inside the container, and I also noticed that manually exporting the default region made the tests pass.
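
In other words, something like this inside the container (the region value is just an example):

# Manually setting the default region made the AWS checks pass.
export AWS_DEFAULT_REGION=eu-west-1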

Kubernetes Inspection test succeeded

I also did some troubleshooting and realized that similar issues have appeared in the underlying Ruby code, so it is something that needs to be fixed. I should mention that I tried setting the environment variable during the container build, and that did not succeed either because of this error, so further fine-tuning might be required. Still, this tool is really great for getting proper visibility into the cluster environment.

Kudos to everyone involved with the project, and I hope community contributions are always welcome!


Aruna Fernando

"Sharing knowledge doesn't put your job at risk - iron sharpen iron" I heard this and it's true.
