Posted in: homelab, Projects

Deploying Drone by Harness to k8s

Prologue:

I was happy, or so I thought. For what felt like the hundredth time, I noticed a red circle next to the bell on my Jenkins server. Something was out of date enough to pose a security risk. Did I care? Well, I was currently streaming, so I guess I sort of have to care, right?

So I do the proper thing: I fire up the dev instance I have and validate what the update will look like. Should be straightforward. Modify the manifest to point to the new version of the container, hit send, and wait to see what happens. Upgrade goes great, no REAL problems. I update all of the plugins and the dev instance seems to be humming along. Great. So let’s do the same thing with the “prod” version.

I open the manifest for the prod instance, I change the version number, I hit send. That’s when I see it: the pod rebooted. Wait, why? It’s exactly the same as dev; in fact I’ve been cloning the files back and forth and only changing the namespace to validate that they are in fact the same. Well, after a second reboot the pod stays up. The logs show failures to access files on the volumes I use. OK, that must be it: the old prod had a job running as the new prod was starting up, and it failed. No big deal.

This is where I shoot myself in the foot. “Hi, it’s me, I’m the problem, it’s me.” I realized that I was working on the wrong branch of my manifests repo, so like a good little devop I commit my changes and switch to the correct one. I then try to run my job. Nothing crazy, just building a container that would later be needed to build Docker containers; the only real change was to add my root certificate to it to avoid future problems.

X509 error: Certificate not signed by recognized authority.

OK… I exhale. I know what’s happened. I just need to point the manifest back at the root cert secret I have in the namespace. So I do, and push the deployment. Oops, I just rolled the Jenkins version back. OK, no big deal, push again with the correct version. No problem… oh wait, all of my plugin configurations are gone. Seems that Jenkins helpfully deleted anything it couldn’t read. Have I mentioned that I haven’t found a way to back up the Kubernetes plugin configuration? Well, I haven’t. If it exists, I’d love to know about it, drop a note here.

So that’s the why. There should be a link at the top to skip the “growing up in the northeastern winters” recipe-blog story and take you straight down here.

Process:

First things first: make the manifests. You can get them from my GitHub if you like, or follow along from the stream. Grab those and scan through ’em. I’ve pulled my server info out so you can add yours in.
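
To give you a rough idea of what you’re looking at, here’s a stripped-down sketch of the server Deployment. The names, namespace, and hostname are placeholders rather than what’s actually in my repo, so swap in your own:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: drone-server              # placeholder name
      namespace: drone                # placeholder namespace
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: drone-server
      template:
        metadata:
          labels:
            app: drone-server
        spec:
          containers:
            - name: drone
              image: drone/drone:2    # image from the install docs, see step 4 below
              ports:
                - containerPort: 80
              env:
                # Drone is configured entirely through environment variables;
                # the interesting ones are covered in the steps below.
                - name: DRONE_SERVER_HOST
                  value: drone.example.com   # placeholder, your server's hostname
                - name: DRONE_SERVER_PROTO
                  value: https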

  1. Decide your data storage method. I like Postgres, so that’s what is set by default in the manifests.
    The environment variables DRONE_DATABASE_DRIVER, DRONE_DATABASE_DATASOURCE, and DRONE_DATABASE_MAX_CONNECTIONS control which database Drone uses (there’s a snippet just after this list). I tend not to deal with SQLite because I don’t like using local storage in my cluster, but you do you.
  2. Find the right set of variables for whichever database you choose from the reference docs.
  3. You’re also going to choose your Git backend here. The instructions vary depending on which one, so I’ll just leave this here: it’s the start of their install docs. You’re really just looking for which variables you need, since it’s 100% configured that way, which is quite noice. Good job, Drone folks. (The example below this list assumes GitHub.)
  4. Validate that the image version is still good. I used the image from the install docs, drone/drone:2.
    That may or may not still be valid by the time you read this.
  5. Runners – this is where I start to get hazy, and I plan to revisit this. Near as I can tell, runners are just a thing that runs containers with the commands you tell them to. So if you want to package something with Docker, you need a container with Docker in it, and you tell it to build the repo using their pipeline files. Example here: https://docs.drone.io/pipeline/overview/ (and a bare-bones pipeline is sketched below this list).
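
To put steps 1 through 3 together: the database and the Git backend both come down to environment variables on the server container. Here’s a sketch assuming Postgres and GitHub; the DRONE_GITHUB_* and DRONE_RPC_SECRET names are from the install docs as I remember them, so double-check them against the reference for your provider, and every name below is a placeholder:

    env:
      # database (steps 1 and 2)
      - name: DRONE_DATABASE_DRIVER
        value: postgres
      - name: DRONE_DATABASE_DATASOURCE
        value: postgres://drone:CHANGEME@postgres.drone.svc:5432/drone?sslmode=disable
      # Git backend (step 3) -- GitHub shown, other providers use different variables
      - name: DRONE_GITHUB_CLIENT_ID
        valueFrom:
          secretKeyRef:
            name: drone-secrets          # placeholder secret name
            key: github-client-id
      - name: DRONE_GITHUB_CLIENT_SECRET
        valueFrom:
          secretKeyRef:
            name: drone-secrets
            key: github-client-secret
      - name: DRONE_RPC_SECRET           # shared secret the runner uses to talk to the server
        valueFrom:
          secretKeyRef:
            name: drone-secrets
            key: rpc-secret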
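
And for step 5, the pipeline file in each repo is what the runner actually executes. A bare-bones .drone.yml, assuming the Kubernetes runner, looks something like this (the image and commands are just an illustration, not what I’m building):

    kind: pipeline
    type: kubernetes
    name: default
    steps:
      - name: build
        image: golang:1.21        # any image that has the tools your build needs
        commands:
          - go build ./...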

After another day of configuring, I’ve come to the conclusion that Drone will not work for my environment, OR I’m not willing to push further down the rabbit hole to make it work. For the most part I can get the runner to start new containers; locating and setting more variables seemed to be the fix. What at first seemed like a boon turned into a bit of a headache, in that I spent my time tracking down variables that would work in my environment. And round and round I went, circling the documentation, finding the next variable to get me to my next error message.
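
To give an idea of the kind of variables I mean: the Kubernetes runner at minimum wants the RPC settings pointing back at the server. This is a sketch from the runner docs as I remember them, not necessarily the exact set I ended up with, and the names are placeholders:

    containers:
      - name: runner
        image: drone/drone-runner-kube:latest   # runner image from the docs
        env:
          - name: DRONE_RPC_HOST
            value: drone.example.com            # placeholder, the server's hostname
          - name: DRONE_RPC_PROTO
            value: https
          - name: DRONE_RPC_SECRET              # must match the server's DRONE_RPC_SECRET
            valueFrom:
              secretKeyRef:
                name: drone-secrets             # placeholder secret name
                key: rpc-secret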

I will update the manifests on GitHub with my final versions.
