<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[C.T.]]></title><description><![CDATA[The raw thoughts of Chris Timberlake.]]></description><link>https://christophertimberlake.com/</link><image><url>https://christophertimberlake.com/favicon.png</url><title>C.T.</title><link>https://christophertimberlake.com/</link></image><generator>Ghost 5.24</generator><lastBuildDate>Wed, 29 Oct 2025 04:06:51 GMT</lastBuildDate><atom:link href="https://christophertimberlake.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Beware of "Normalization Of Deviance"]]></title><description><![CDATA[Normalization of Deviation is behind a great many accidents, mistakes, and it has robbed many of their lives. However, Deviation is also required, and without deviation, we also risk removing the tools necessary to prevent such things.]]></description><link>https://christophertimberlake.com/normalization-of-deviance/</link><guid isPermaLink="false">63f279463f8b91048be8284d</guid><category><![CDATA[Editorial]]></category><dc:creator><![CDATA[Chris Timberlake]]></dc:creator><pubDate>Sun, 19 Feb 2023 22:27:26 GMT</pubDate><content:encoded><![CDATA[<p>It&apos;s Sunday, which for me means it&apos;s a research day. Often, I&apos;ll research Aviation, Boating, or other Industrial accidents and failures. 
I find that when discussing Digital Transformations or processes, real-world failures often, if not always, correlate to failures in technology.</p><p><a href="https://en.wikipedia.org/wiki/Normalization_of_deviance">Normalization of Deviance</a> is &quot;the process in which deviance from correct or proper behavior or rule becomes normalized in a government or corporate culture.&quot;</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://ewr1.vultrobjects.com/ct-blog-s3/2023/02/Spiral.jpg" class="kg-image" alt loading="lazy" width="1000" height="563"><figcaption>&quot;The Deviation Spiral&quot; - https://www.cirrustraining.com.au/normalisation-of-deviance-and-the-80-rule/</figcaption></figure><p>Over time, as you deviate from a process further and further, your safety margins shrink. Your actions become less resilient to failure and more accident-prone. As pictured above, a normalization of deviance, or a culture of deviation, often results in a spiral. The deviation becomes the new normal, and further deviance continues.<br><br>At some point, that deviance becomes codified, further shrinking your failure, safety, and accident margins. Such was the case with the <a href="https://www.congress.gov/bill/101st-congress/house-bill/5802">Aleutian Trade Act of 1990</a>, under which boats whose primary purpose is to engage in fishing commerce were allowed to bypass the inspections required of cargo-for-hire boats. Those fishing boats were then allowed to haul cargo. In the case of the <a href="https://en.wikipedia.org/wiki/Northern_Belle">FV Northern Belle</a>, this led to the overloading of a fishing vessel as well as incorrect load and balance calculations. 
Those two items, combined with a poorly maintained ship with rusty watertight bulkheads, resulted in its rapid sinking, the unfortunate death of its captain, and a complete hull loss.</p><figure class="kg-card kg-image-card"><img src="https://ewr1.vultrobjects.com/ct-blog-s3/2023/02/2330684926_0978e8701f_b.jpg" class="kg-image" alt loading="lazy" width="1024" height="683"></figure><p>And yet, deviation from normal process is in some cases necessary. How do we design systems and processes that allow for deviation when necessary, but do not fuel a culture of deviance? A key component of resilience is flexibility: how do we design systems of delivery that are safe and secure, while remaining resilient and flexible?<br><br></p><h3 id="deployment-of-uns-fpic-framework">Deployment of UN&apos;s FPIC Framework</h3><figure class="kg-card kg-image-card"><img src="https://ewr1.vultrobjects.com/ct-blog-s3/2023/02/Untitled_55.png" class="kg-image" alt loading="lazy" width="664" height="377"></figure><p>The United Nations has a policy for working with Indigenous Peoples. This framework is called <a href="https://www.fao.org/indigenous-peoples/our-pillars/fpic/en/">FPIC, or Free, Prior and Informed Consent</a>. Its purpose is to respect the decision and input of Indigenous Communities. By implementing this framework, we can work to ensure that Indigenous Communities are able to make informed decisions, without fear or pressure of coercion. As a side effect of this framework, we also ensure that the decision being made is fully weighed and understood. That its benefits, costs, detriments, weaknesses, and downsides are fully understood. That the outcome of said action is understood to the best of everyone&apos;s ability, and any unknown outcomes that occur are in good faith unknown to all.<br><br><em>With this, any actions or agreements we undertake are performed with a full understanding of the potential outcomes. 
</em></p><p>Which is exactly what a deviation from process requires to insulate against creating a culture of normalized deviation. More importantly, because a deviation is a risk event, informed consent is required to guard against adverse outcomes. Every deviation should include FPIC, so that the deviation and its outcomes are fully understood and weighed, every time it occurs. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://ewr1.vultrobjects.com/ct-blog-s3/2023/02/Screenshot-2023-02-19-at-2.58.51-PM.png" class="kg-image" alt loading="lazy" width="784" height="267"><figcaption>Jenkins Skipped Pipeline - https://comquent.de/en/skipped-stages-in-jenkins-scripted-pipeline/</figcaption></figure><p><br>An example of this would be skipping security tests for a software deployment because the security scanners are offline. A resilient process is flexible, and it may be necessary to ship software for legal, finance, or other reasons. However, the security scan is a blocking function because it&apos;s critical to the process. So in order to deliver, we need to deviate.</p><p>In this scenario, we would involve the Security, Management, QA, and Development Teams. We would sit down and collect all the evidence we have of previous security scans, any changes that were made, and all information we have on hand at that moment.</p><p>Then we allow said teams to process this information, become informed, and provide their consent. With everyone&apos;s consent, the security scan is skipped, but the remainder of the process remains in effect. <strong>Without everyone&apos;s consent, the delivery is scrapped, without backlash or political fallout</strong>.</p><p>A key component of Informed Consent is the lack of pressure or force. Consent is automatically invalidated the moment coercion is introduced. 
If we look at many Aviation or Boating incidents, we find that respecting the voice of a single dissenter could have (and has!) resulted in the avoidance of disaster.</p><h3 id="rejection-of-process-and-setting-things-on-fire">Rejection of Process, and Setting things on fire</h3><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://ewr1.vultrobjects.com/ct-blog-s3/2023/02/73hjei.jpg" class="kg-image" alt loading="lazy" width="836" height="431"><figcaption>Bill from <a href="https://en.wikipedia.org/wiki/Gravity_Falls">Gravity Falls</a></figcaption></figure><p>Now, if you&apos;ve gotten this far, I know some of you, like me, are going &quot;But Chris, when do we set the entire thing on fire?&quot;<br><br>When you reach a point where requests for deviation become the norm, that is a key indicator your process is broken and not working. It&apos;s no longer meeting its purpose and requires reconsideration. Without changing the process, you risk making deviations normal, and inadvertently create a culture where your folks accept deviation as normal. This is an unwanted outcome even if FPIC is utilized; every deviation from process increases your risk, and over time, small dangerous events can add up to a high risk level.<br><br>So, instead of burning the process down, take it, study it, identify what works and what does not, and finally iterate on it slowly to improve it. Process exists for a reason; just because a process no longer benefits you does not mean that reason ceases to exist. So, respect your prior process and the lessons that process was born from.<br><br>Collect metrics on the deviations from your current process, categorize them, identify them, and understand why they occurred. Take those lessons and build them into your next process so that you have fewer deviation events. 
In our security scanner example above, maybe we build a testing environment for our security tools, and maybe we only allow maintenance on the production tools on Mondays... unless a deviation is required.</p><h3 id="in-closing">In Closing</h3><p>The goal of the editorials on my blog is to get you thinking. To get you to wonder about things, ask questions, and challenge yourself. I aim to be as factual as possible, and I aim to never give you information that you cannot act on. <br><br>Normalization of Deviation is behind a great many accidents and mistakes, and it has robbed many of their lives. However, deviation is also required, and without deviation, we also risk removing the tools necessary to prevent such things. I hope after reading this you find yourself wondering where deviation occurs in your process, and how you can improve it.</p>]]></content:encoded></item><item><title><![CDATA[Purpose Built Containers with GitLab CI + OpenShift]]></title><description><![CDATA[How do we build secure, fast, and small containers that serve a single purpose? What is the best way to make containers for use in CI/CD processes?]]></description><link>https://christophertimberlake.com/purpose-built-containers/</link><guid isPermaLink="false">63897581bb655d2c4188970e</guid><category><![CDATA[GitLab]]></category><dc:creator><![CDATA[Chris Timberlake]]></dc:creator><pubDate>Thu, 10 Sep 2020 02:50:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This is an article I&apos;ve been meaning to write for a while. The purpose of this article is to cover the methodology and purpose behind &quot;Purpose Built Containers&quot;. This is a concept you&apos;ve likely seen expressed elsewhere and in other connotations. A good example of this is the <a href="https://12factor.net">12 Factor Application</a> or <a href="https://docs.openshift.com/container-platform/3.11/using_images/index.html">OpenShift&apos;s S2I</a>. 
To be clear, these are not new concepts when it comes to containers, but it&apos;s a methodology tied to GitLab and one that stands on the shoulders of giants.</p>
<p>A &quot;Purpose-Built Container&quot; is, if you&apos;ve ever seen Rick and Morty, essentially a Meeseeks Box of the IT world. That means it&apos;s a container that exists only to serve a specific purpose, and then it quickly and quietly dies or is shut down. It is not designed to be long-standing or to host an application. It starts, it serves its purpose, then it stops and exits.</p>
<p><em>Note: When we refer to the container, we mean the compiled image that is runnable. When we refer to the image, we&apos;re referring to the Dockerfile definition, prior to being built.</em></p>
<p><a href="https://gitlab.com/lackastack/article-repos/purpose-built-containers">Our Purpose Built Container</a><br>
<a href="https://gitlab.com/lackastack/article-repos/purpose-built-container-usage/">Using Our Purpose Built Container</a></p>
<p><strong><a href="https://gitlab.com/lackastack/article-repos/purpose-built-container-usage/-/jobs/732280518">An example of this process can be seen here</a></strong></p>
<h2 id="gitlab">GitLab</h2>
<p>Every CI/CD job inside of a GitLab pipeline follows this process: the job spins up, pulls &amp; starts a Docker image, and then conducts its build steps. Finally, the container is stopped and shut down. No container is ever run twice; even if a job is retried, a new container is spun up.</p>
<p>GitLab CI/CD can utilize any docker image, but not every docker image is built with this process in mind. For example, some docker images spin up an entire virtual machine, while others spin up a database alongside your application in the same container. These are not purpose-built containers: they serve multiple purposes, and their purposes change over time, whereas a purpose-built container serves one purpose and then dies.</p>
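<p>A tiny pipeline makes this concrete. Both jobs below use the same image, but each gets a brand-new container, so state never carries over between jobs (the job names and file path are purely illustrative):</p>
<pre><code class="language-yaml">job-one:
  image: alpine:3.12.0
  script:
    - touch /tmp/state
    - ls /tmp/state   # exists, but only inside this job&apos;s container

job-two:
  image: alpine:3.12.0
  script:
    - ls /tmp/state   # fails; this is a fresh container, the file was never here
</code></pre>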
<h2 id="anatomy-of-a-purpose-built-container">Anatomy of a Purpose-Built Container</h2>
<p>A Purpose-Built Container is comprised of the following components:</p>
<ol>
<li>
<p><strong>They&apos;re Small:</strong> A purpose-built container should comprise only the minimum components needed to run. This may include a NodeJS runtime, for example. However, you&apos;d want to avoid including any application-specific dependencies; include only the NodeJS runtime itself. Once the container is up and running, you inject your dependencies into it. There are exceptions to this rule; for example, internal certificates may be baked into the image. The rule is: if the image doesn&apos;t need it to run, it shouldn&apos;t be included.</p>
</li>
<li>
<p><strong>They&apos;re Multiuse:</strong> &quot;Purpose-built&quot; applies to how a container runs, and not how it&apos;s built. You can build a container for multiple purposes and uses as long as it doesn&apos;t violate rule #1. You want your purpose-built containers to be built with multiple consumers and usages in mind. Not only should this reduce the specific configuration you include in them, it should keep the image small in size, which follows Rule #1 and keeps it fast. For example, Maven and NodeJS should be separate containers, but each can include global requirements like internal certificates.</p>
</li>
<li>
<p><strong>Their entrypoint is empty:</strong> A &quot;Purpose-Built&quot; container is a zombie: it has no purpose other than what is given to it. Because of this, it should contain a dummy or empty entrypoint. A perfect example is an entrypoint with a shell script that echoes instructions on how to use the container rather than running anything. &quot;Purpose-Built&quot; containers are not designed to be long-standing or run in a customer-facing/production environment.</p>
</li>
<li>
<p><strong>Avoid needless layers:</strong> Every command inside of a Dockerfile produces a new &quot;layer&quot; in the Docker image, making the image larger and slower to pull, start, run, and destroy. Reduce layers as much as possible, opting for <code>RUN command &amp;&amp; command</code> as opposed to <code>RUN command \n RUN command</code> - the first produces one layer, the second two.</p>
</li>
<li>
<p><strong>Run with the least amount of privilege:</strong> Most purpose-built containers will not need any form of elevated permissions. They need the bare minimum of user permissions to function. You should have them run with <code>USER 1001</code> so that they run as an unprivileged, non-root user. On OpenShift this is a requirement (OpenShift substitutes its own random uid); on Docker/Kubernetes it is simply good practice. On OpenShift, all random uids run in the root group, but without root permissions, so just set any files or folders to be owned by the root group and they&apos;ll be usable by that user.</p>
</li>
</ol>
<h2 id="how-do-we-build-the-image">How do we build the image?</h2>
<p>To build the image, we&apos;re going to start with a basic folder structure. Go ahead and make a new GitLab project, then initialize it with a readme and clone it down to your local machine. The folder structure should look like this:</p>
<ul>
<li>Dockerfile</li>
<li>.gitlab-ci.yml</li>
</ul>
<h4 id="1-for-our-dockerfile-were-going-to-do-the-following">1. For our Dockerfile we&apos;re going to do the following..</h4>
<p>Remember, we want our Dockerfile to be as light and generic as possible. Notice on the RUN line how I&apos;m merging several commands into one to reduce layers. You will also note that I have added <code>USER 1001</code> to the bottom of the Dockerfile; this ensures that the container runs as an unprivileged user. Some workloads may require more, but for this purpose, we don&apos;t.</p>
<p>At the end of the RUN line, we have a chmod command. This command gives the <code>/.npm</code> folder root-group permissions. Any random uid the container is assigned belongs to the root group, but has no root-user privilege, so this keeps the folder writable for it.</p>
<pre><code class="language-bash">FROM alpine:3.12.0
RUN apk update &amp;&amp; apk add --no-cache nodejs npm &amp;&amp; mkdir /.npm &amp;&amp; chmod -R g=u /.npm

USER 1001

CMD [&quot;echo&quot;, &quot;This is a &apos;Purpose-Built Container&apos;, It is not meant to be ran this way. Please visit www.lackastack.net to see how to use it!&quot;]
</code></pre>
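<p>If you have Docker available locally, you can sanity-check the image and its dummy entrypoint before wiring it into CI. These commands are just a local illustration; the image name is made up:</p>
<pre><code class="language-bash"># Build the image from the Dockerfile above and run it once.
docker build -t purpose-built-node .
# With no arguments, the container only prints the usage message from CMD and exits.
docker run --rm purpose-built-node
# Overriding the command shows the NodeJS runtime is still present.
docker run --rm purpose-built-node node --version
</code></pre>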
<h4 id="2-our-gitlab-ci-file-to-automating-create-and-maintain-it">2. Our GitLab CI file to automatically create and maintain it.</h4>
<p>This GitLab CI file will build our image and save it in the project&apos;s GitLab Container Registry, tagged as <code>latest</code> on every build of master. When a git tag is pushed, it will release an image with that tag as the Docker image tag. This allows you to properly version your Docker images for release and maintenance.</p>
<pre><code class="language-yaml">docker-build-master:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u &quot;$CI_REGISTRY_USER&quot; -p &quot;$CI_REGISTRY_PASSWORD&quot; $CI_REGISTRY
  script:
    - docker build --pull -t &quot;$CI_REGISTRY_IMAGE&quot; .
    - docker push &quot;$CI_REGISTRY_IMAGE&quot;
  only:
    - master

docker-build-release:
  image: docker:latest
  stage: build
  when: manual
  services:
    - docker:dind
  before_script:
    - docker login -u &quot;$CI_REGISTRY_USER&quot; -p &quot;$CI_REGISTRY_PASSWORD&quot; $CI_REGISTRY
  script:
    - docker build --pull -t &quot;$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG&quot; .
    - docker push &quot;$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG&quot;
  only:
    - tags
</code></pre>
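<p>A quick note on <code>$CI_COMMIT_REF_SLUG</code>: it is the ref name lowercased, with everything outside <code>0-9</code> and <code>a-z</code> replaced by <code>-</code>. You can approximate it locally (this sketch ignores GitLab&apos;s 63-byte truncation):</p>
<pre><code class="language-bash"># Approximate the slug GitLab derives from a tag or branch name:
# lowercase it, then replace every character outside [0-9a-z] with '-'.
slugify() { echo "$1" | tr 'A-Z' 'a-z' | sed 's/[^0-9a-z]/-/g'; }
slugify v1.0.0   # prints v1-0-0
</code></pre>
<p>So pushing a tag named <code>v1.0.0</code> publishes the image as <code>$CI_REGISTRY_IMAGE:v1-0-0</code>.</p>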
<h2 id="how-do-we-use-the-image">How do we use the image?</h2>
<p>If you&apos;ve gotten here, we have an image. It&apos;s lightweight, it follows the rules, and it has a NodeJS runtime. Below we have a GitLab CI pipeline that utilizes our image. You&apos;ll also notice that it has a <code>before_script</code> block with a list of commands. We&apos;re using only one, to install <code>express</code>. Now, why are we installing this here?</p>
<p>Above, we spoke about the fact that our image needs to be multiuse. This means we can&apos;t bake things like Express into our NodeJS image, so we use <code>before_script</code> to run the commands that install it. This is a careful dance: if you have a couple of dependencies, this won&apos;t take long. But if you have a large number of dependencies that many people use, you should bake them into the Docker image and allow them to be reused.</p>
<p>The goal here is about speed, and less waste. If you have a bunch of unique images where everyone makes one, you have a bunch of waste. But if you have a small number of images with everything but the kitchen sink, you end up with very slow pipelines. There&apos;s a careful and delicate balance to be had.</p>
<pre><code class="language-yaml">build:
  stage: build
  image: registry.gitlab.com/lackastack/article-repos/purpose-built-containers:latest
  before_script:
    - npm install express
  script:
    - echo &quot;Installed&quot;
</code></pre>
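<p>If the same handful of packages is installed on every run, GitLab&apos;s built-in <code>cache</code> keyword is another way to cut that time without baking anything into the image. A sketch (the cache key and paths are illustrative):</p>
<pre><code class="language-yaml">build:
  stage: build
  image: registry.gitlab.com/lackastack/article-repos/purpose-built-containers:latest
  cache:
    key: &quot;$CI_COMMIT_REF_SLUG&quot;
    paths:
      - node_modules/
  before_script:
    - npm install express
  script:
    - echo &quot;Installed&quot;
</code></pre>
<p>The first run still pays the install cost; later runs on the same branch restore <code>node_modules</code> from the cache.</p>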
<h2 id="conclusion">Conclusion</h2>
<p>I hope you got some new insight out of this. The goal here, again, is to build a container for use inside of GitLab CI in addition to being used inside of OpenShift. Many public containers are not usable inside of OpenShift due to security rules, and thus you may be forced to make your own. If you follow this process, your images should be quick and effective on GitLab and also run properly on OpenShift.</p>
<p>May your containers be small, and your CI Pipelines be fast.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Deploying a GitLab Runner in OpenShift Unprivileged]]></title><description><![CDATA[By default GitLab & its Runners require anyuid and root access to work. They also require Helm charts to install. The purpose of this guide is to walk you through how to build your own GitLab Runner container that does not use Root, AnyUID or Helm Charts.]]></description><link>https://christophertimberlake.com/deploying-a-gitlab-runner-in-openshift-unprivileged/</link><guid isPermaLink="false">63897413bb655d2c418896e4</guid><category><![CDATA[GitLab]]></category><dc:creator><![CDATA[Chris Timberlake]]></dc:creator><pubDate>Thu, 21 Nov 2019 05:00:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text">Since writing this article, GitLab has launched OpenShift support via the GitLab OpenShift Operator. <a href="https://docs.gitlab.com/runner/install/operator.html">You can read about this in the Official GitLab Documentation (Click Here).</a></div></div><!--kg-card-begin: markdown--><p>By default GitLab &amp; its Runners require anyuid and root access to work. They also require Helm charts to install. If you&apos;re an OpenShift Administrator, these two things are likely not compatible with your way of doing things. You likely also don&apos;t want GitLab Runners to run CI Jobs using AnyUID and Root access. The purpose of this guide is to walk you through how to build your own GitLab Runner container that does not use Root or AnyUID, so that it&apos;s secure and communicates properly with GitLab.</p>
<h2 id="requirements">Requirements</h2>
<ul>
<li>One OpenShift Cluster (With Storage)</li>
<li>One GitLab Installation</li>
<li>Cluster Admin access to OpenShift Cluster</li>
</ul>
<p>The very first thing you&apos;re going to want to do is clone the repo with our OpenShift Resources. Then we&apos;ll go through them.</p>
<pre><code class="language-bash">cd &lt;Your_Projects_Directory&gt;
git clone https://gitlab.com/lackastack/article-repos/openshift-gitlab-runner.git
</code></pre>
<h2 id="how-the-runner-is-assembled">How the runner is assembled.</h2>
<p>I&apos;ve gone through the process of setting up OpenShift Templates to make this process easier. Without them, this guide would be extremely long. I encourage you to read through the templates and understand how they work and what they do. I&apos;m also not going to deep-dive into the Dockerfile, but I will call out some specific things.</p>
<p>Go ahead and open the Dockerfile. This is the file we&apos;re going to be using to build our container. On lines 6-27, you&apos;ll see that we&apos;re inserting the GitLab certificates for the YUM repos. Then, on lines 29-31, we install the gitlab-runner. Every time you build this container, it will install the newest gitlab-runner into the container.</p>
<p>But it&apos;s lines 34-40 I want to point out.</p>
<pre><code class="language-bash">RUN mkdir -p /etc/gitlab-runner/certs &amp;&amp; \
    mkdir /.gitlab-runner &amp;&amp; \
    mkdir /tmp/gitlab-home &amp;&amp; \
    chmod -R g=u /.gitlab-runner &amp;&amp; \
    chmod -R g=u /etc/gitlab-runner &amp;&amp; \
    chmod -R g=u /tmp/gitlab-home &amp;&amp; \
    chmod +x /usr/bin/dumb-init
</code></pre>
<p>These lines do the magic. When the gitlab-runner installs via yum, it creates folders itself with permissions for root and the gitlab user. Because OpenShift runs containers as a random uid, we can&apos;t and shouldn&apos;t use those. So we&apos;re making new folders and reassigning them to the root group. You see, when OpenShift assigns a random uid to a container, that random uid is always in the root group. And because we use <code>chmod -R g=u</code>, the group is given the same permission bits as the owning user, so any uid in the root group can use those folders.</p>
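<p>You can see what <code>g=u</code> does with a quick local experiment; the path here is just an example:</p>
<pre><code class="language-bash"># g=u copies the permission bits of the owner onto the group, so any
# member of the owning group gets the same access as the owner.
mkdir -p /tmp/glrunner-demo
chmod 750 /tmp/glrunner-demo      # owner=rwx, group=r-x, other=none
chmod -R g=u /tmp/glrunner-demo   # group now mirrors the owner: rwx
stat -c %A /tmp/glrunner-demo     # prints drwxrwx---
</code></pre>
<p>That is why a random uid, which is always in the root group, can use folders prepared this way.</p>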
<p>Now, let&apos;s go ahead and open the entrypoint file. Scroll down to lines 19-31, Ignore that certificate stuff.</p>
<pre><code class="language-bash">if [ ! -f &quot;/.gitlab-runner/config.toml&quot; ]; then
  gitlab-runner register --non-interactive \
                        --template-config /scripts/config.toml \
                        --url $GITLAB_URL \
                        --executor kubernetes \
                        --locked=false \
                        --access-level=&quot;not_protected&quot; \
                        --docker-privileged=false \
                        --kubernetes-privileged=false 
fi
# Start the runner
gitlab-runner run --working-directory=/tmp/gitlab-home --listen-address=0.0.0.0
</code></pre>
<p>Every time the container starts or restarts, it will run this script. We start by checking to see if the config.toml is in one of the directories we created. If it&apos;s not, then we need to register the runner, so we register it with a one-line registration command. The secret sauce is here: <code>--template-config /scripts/config.toml</code>. Later, in our OpenShift templates, we&apos;ll create a ConfigMap with our basic config.toml and input our default values into it. The GitLab Runner will take this file, use it as a template, and fill out the rest for us.</p>
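<p>For reference, a minimal template in that spirit might look like the following. These values are purely illustrative; the real template lives in the repo&apos;s ConfigMap, and the runner fills in the registration details itself:</p>
<pre><code class="language-bash">concurrent = 10
check_interval = 30

[[runners]]
  [runners.kubernetes]
    namespace = &quot;my-runner-namespace&quot;
    cpu_request = &quot;100m&quot;
    memory_request = &quot;128Mi&quot;
</code></pre>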
<pre><code class="language-bash">--executor kubernetes
</code></pre>
<p>This is so the runner operates as if it was in Kubernetes, and it sort-of is.</p>
<pre><code class="language-bash">--access-level=&quot;not_protected&quot; \
--docker-privileged=false \
--kubernetes-privileged=false
</code></pre>
<p>These lines tell the runner to run with the fewest permissions possible. This prevents us from doing Docker-in-Docker, but because you&apos;re on OpenShift, we don&apos;t need that. You&apos;ll see why in an upcoming <em>GitLabOps on OpenShift</em> tutorial.</p>
<h2 id="importing-and-building-the-runner">Importing and Building the Runner.</h2>
<p>This is where your Cluster Admin access will come in handy! Before we do this, please take a look at the <code>build-template.yml</code> file. This is what we&apos;ll be using and importing. You shouldn&apos;t import anything into your cluster without understanding what it does. Without further ado, let&apos;s import the template; run the following.</p>
<pre><code class="language-bash">oc login &lt;stuff&gt;
oc project openshift
oc process -f ./build-template.yml | oc apply -f-
</code></pre>
<p>This will import the build object and image stream into your OpenShift cluster in the OpenShift Namespace. Why there? Because we want this image and template to be accessible by everyone, but not editable by anyone but a cluster admin. Now that this is done, let&apos;s go ahead and start the image build.</p>
<pre><code class="language-bash">cd &lt;Your_GitCloned_Directory&gt;
oc start-build gitlab-runner --from-dir=. --follow
</code></pre>
<p>At this point, the build is going to start and stream its output to the terminal. Let it run; it can take up to 5 minutes. When it&apos;s done, you should see a success or image-push message. This means the image is in the registry and ready to be used.</p>
<h2 id="the-gitlab-runner-template">The GitLab Runner Template.</h2>
<p>This is the template that we want to import so that all users of GitLab can provision and deploy their own runners. The idea is that users of OpenShift and GitLab can have a namespace on OpenShift and a project on GitLab, then install a runner on OpenShift in their namespace pointed at their GitLab project. Their runners don&apos;t affect anyone else&apos;s namespaces, you can monitor metrics per namespace, and you can limit resources on the namespace as well. Plus, it&apos;s self-service, so win-win.</p>
<p>The included template is big; please read it. I&apos;m not going to quote it here because it&apos;s huge. When installed in the OpenShift namespace, this template adds the GitLab-Runner to the self-service portal for namespaces. When executed, this template installs:</p>
<ul>
<li>2 Persistent Volume Claims</li>
<li>1 Deployment Config</li>
<li>1 ConfigMap</li>
<li>2 RoleBindings</li>
<li>1 Service Account</li>
</ul>
<p>All of these elements are required to power the runner. Let&apos;s go ahead and install it now.</p>
<pre><code class="language-bash">oc login &lt;stuff&gt;
oc project openshift
oc apply -f ./runner-template.yml -n openshift
</code></pre>
<p>If this succeeds, users will be able to see the GitLab-Runner object in their self-service portal within a few moments.</p>
<h2 id="the-final-touches-deploying-the-runner">The Final Touches, Deploying the Runner.</h2>
<!--kg-card-end: markdown--><figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://christophertimberlake.com/content/images/2022/12/unprivileged-glrunner-openshift-1.png" width="1744" height="1302" loading="lazy" alt srcset="https://christophertimberlake.com/content/images/size/w600/2022/12/unprivileged-glrunner-openshift-1.png 600w, https://christophertimberlake.com/content/images/size/w1000/2022/12/unprivileged-glrunner-openshift-1.png 1000w, https://christophertimberlake.com/content/images/size/w1600/2022/12/unprivileged-glrunner-openshift-1.png 1600w, https://christophertimberlake.com/content/images/2022/12/unprivileged-glrunner-openshift-1.png 1744w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://christophertimberlake.com/content/images/2022/12/unprivileged-glrunner-openshift-2.png" width="1794" height="1332" loading="lazy" alt srcset="https://christophertimberlake.com/content/images/size/w600/2022/12/unprivileged-glrunner-openshift-2.png 600w, https://christophertimberlake.com/content/images/size/w1000/2022/12/unprivileged-glrunner-openshift-2.png 1000w, https://christophertimberlake.com/content/images/size/w1600/2022/12/unprivileged-glrunner-openshift-2.png 1600w, https://christophertimberlake.com/content/images/2022/12/unprivileged-glrunner-openshift-2.png 1794w" sizes="(min-width: 720px) 720px"></div></div></div></figure>]]></content:encoded></item><item><title><![CDATA[Running Jenkins Files inside GitLab CI]]></title><description><![CDATA[Learn how to lift and shift your Jenkins jobs over to GitLab CI while you migrate.]]></description><link>https://christophertimberlake.com/running-jenkins-files-inside-gitlab-ci/</link><guid isPermaLink="false">638972e4bb655d2c418896b2</guid><category><![CDATA[GitLab]]></category><dc:creator><![CDATA[Chris Timberlake]]></dc:creator><pubDate>Thu, 14 Nov 2019 05:00:00 
GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="running-jenkins-files-inside-gitlab-ci">Running Jenkins Files inside GitLab CI</h2>
<p>First, I want to set some ground rules for this. For starters, this process is not meant for long-term use. There are many downsides to it, such as the fact that it only runs in one GitLab stage and isn&apos;t asynchronous. <em>However</em>, this process can be used to run your Jenkins builds in GitLab CI while you&apos;re migrating your Jenkinsfile to GitLab CI syntax. Make no mistake, this doesn&apos;t solve your migration woes, but it does allow you to run your Jenkinsfile inside GitLab for the time being. It&apos;s a stop-gap measure.</p>
<h2 id="step-1-setting-up-jenkins-locally">Step 1: Setting up Jenkins Locally.</h2>
<p>This process requires files from Jenkins. To get them, we need to spin up a Jenkins instance locally, configure it, then extract the files. You can do that using the commands below. <em>Note: The Jenkins version here is pinned for a reason. It has to match the Jenkins version that the jenkinsfile-runner.jar below is built against.</em></p>
<pre><code class="language-bash">docker run -d -p 49001:8080 jenkins/jenkins:2.176.2
fac214d16d9f302cbdcc7d950314cfb9e9da8b06efd7c8104c52034c82328a6a
docker exec -it ${THE_DOCKERID_RETURNED_ABOVE} /bin/bash
</code></pre>
<p>From this point, you should have a terminal with a bash prompt inside the Jenkins container. In a few moments you can access <code>http://localhost:49001</code> and see the Jenkins initial setup, which should look like the screenshot below. We&apos;re going to need that bash prompt to get the initial admin password.</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://christophertimberlake.com/content/images/2022/12/gitlabci-jenkinsfile-1.png" class="kg-image" alt loading="lazy" width="1944" height="832" srcset="https://christophertimberlake.com/content/images/size/w600/2022/12/gitlabci-jenkinsfile-1.png 600w, https://christophertimberlake.com/content/images/size/w1000/2022/12/gitlabci-jenkinsfile-1.png 1000w, https://christophertimberlake.com/content/images/size/w1600/2022/12/gitlabci-jenkinsfile-1.png 1600w, https://christophertimberlake.com/content/images/2022/12/gitlabci-jenkinsfile-1.png 1944w" sizes="(min-width: 720px) 720px"></figure><!--kg-card-begin: markdown--><pre><code class="language-bash">jenkins@b4d2a5891b7f:/$ cat /var/jenkins_home/secrets/initialAdminPassword
c10985e17f7f44038fff0fa776e343e4
</code></pre>
<p>Next, we&apos;re prompted with which plugins to install. Go ahead and pick your option. I chose just the typical plugins, but if you have unique needs, install those - we&apos;ll need them later. Once this is done you should see a screen with the plugins installing.</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://christophertimberlake.com/content/images/2022/12/gitlabci-jenkinsfile-2.png" class="kg-image" alt loading="lazy" width="1956" height="994" srcset="https://christophertimberlake.com/content/images/size/w600/2022/12/gitlabci-jenkinsfile-2.png 600w, https://christophertimberlake.com/content/images/size/w1000/2022/12/gitlabci-jenkinsfile-2.png 1000w, https://christophertimberlake.com/content/images/size/w1600/2022/12/gitlabci-jenkinsfile-2.png 1600w, https://christophertimberlake.com/content/images/2022/12/gitlabci-jenkinsfile-2.png 1956w" sizes="(min-width: 720px) 720px"></figure><!--kg-card-begin: markdown--><p>After this you&apos;ll be prompted to make a user account, so please do. Also select a password that&apos;s not used anywhere else, as we may end up storing it in plain text later.</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://christophertimberlake.com/content/images/2022/12/gitlabci-jenkinsfile-3.png" class="kg-image" alt loading="lazy" width="1960" height="872" srcset="https://christophertimberlake.com/content/images/size/w600/2022/12/gitlabci-jenkinsfile-3.png 600w, https://christophertimberlake.com/content/images/size/w1000/2022/12/gitlabci-jenkinsfile-3.png 1000w, https://christophertimberlake.com/content/images/size/w1600/2022/12/gitlabci-jenkinsfile-3.png 1600w, https://christophertimberlake.com/content/images/2022/12/gitlabci-jenkinsfile-3.png 1960w" sizes="(min-width: 720px) 720px"></figure><!--kg-card-begin: markdown--><p>Couple clicks here, couple taps there, and we&apos;re done!</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://christophertimberlake.com/content/images/2022/12/gitlabci-jenkinsfile-4.png" class="kg-image" alt loading="lazy" width="1950" height="806" srcset="https://christophertimberlake.com/content/images/size/w600/2022/12/gitlabci-jenkinsfile-4.png 600w, https://christophertimberlake.com/content/images/size/w1000/2022/12/gitlabci-jenkinsfile-4.png 1000w, https://christophertimberlake.com/content/images/size/w1600/2022/12/gitlabci-jenkinsfile-4.png 1600w, https://christophertimberlake.com/content/images/2022/12/gitlabci-jenkinsfile-4.png 1950w" sizes="(min-width: 720px) 720px"></figure><!--kg-card-begin: markdown--><h2 id="step-2-extracting-the-files-from-jenkins">Step 2: Extracting the Files from Jenkins</h2>
<p>Now that we have a running Jenkins instance, we need to extract all of the files from it. We&apos;re going to exfiltrate the entire Jenkins home directory from the Docker container. Go ahead and spin up another bash prompt, and make a local directory to save the files into. You&apos;ll also need your container&apos;s ID - if you don&apos;t have it, use <code>docker ps</code>.</p>
<pre><code class="language-bash">mkdir -p ~/Projects/jenkins_home
docker cp ${DOCKER_CONTAINER_ID}:/var/jenkins_home ~/Projects/jenkins_home
docker cp ${DOCKER_CONTAINER_ID}:/usr/share/jenkins/jenkins.war ~/Projects/jenkins.war
</code></pre>
<h2 id="step-3-making-the-jenkinsfile-runner-bin-file">Step 3: Making the JenkinsFile-Runner Bin File.</h2>
<p>We&apos;re going to make a new file called <code>jenkinsfile-runner</code> and put the following inside it. This is the wrapper script used to execute Jenkinsfile-Runner; quoting <code>&quot;${@}&quot;</code> preserves argument boundaries.</p>
<pre><code class="language-bash">#!/bin/bash
exec java -jar /app/bin/jenkinsfile-runner.jar &quot;${@}&quot;
</code></pre>
<h2 id="step-4-creating-the-gitlab-ci-container">Step 4: Creating the GitLab CI Container.</h2>
<p>I trust you know how to make a GitLab project. Go ahead and make one and name it whatever you want; for this article, I&apos;ve named it <a href="https://gitlab.com/lackastack/article-repos/gitlab-jenkinsfile-runner/new/master">gitlab-jenkinsfile-runner</a>. When you create this repo, make sure you initialize it with a README. Now clone it - we&apos;re going to do some things to it.</p>
<pre><code class="language-bash">git clone ${YOUR_REPO_URL}
cd ${YOUR_REPO_DIR}
# Now move the Jenkins Home
mv ~/Projects/jenkins_home/ ./
# Now save the Jenkins Home
git add jenkins_home
</code></pre>
<p>Now that the Jenkins home directory is in our Git repo, we should create the .gitlab-ci.yml file that builds the runner image and pushes it. In the interest of time, I&apos;ve copy/pasted the .gitlab-ci.yml file I used below. It builds a Docker image and uploads it to the project&apos;s GitLab registry.</p>
<pre><code class="language-yaml">docker-build:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u &quot;$CI_REGISTRY_USER&quot; -p &quot;$CI_REGISTRY_PASSWORD&quot; $CI_REGISTRY
  script:
    - docker build --pull -t &quot;$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG&quot; .
    - docker push &quot;$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG&quot;
  except:
    - master
</code></pre>
<p>Now add this file using git so that it&apos;s tracked: <code>git add .gitlab-ci.yml</code>. From here we move on to the longer process of making the Dockerfile. I&apos;m going to break the Dockerfile into chunks and explain it step by step. <strong>It&apos;s important that this is done right. If the Jenkins version or plugins don&apos;t match, you&apos;re going to have a bad time.</strong> So go ahead and create a <code>Dockerfile</code>.</p>
<p>The <code>FROM</code> should be a Java base image. All of Jenkins runs on Java, so we need a JDK.</p>
<pre><code class="language-bash">FROM openjdk:11-jdk
</code></pre>
<p>With these commands we&apos;re making the folder structure we&apos;ll put data into. In the last step, we download the jenkinsfile-runner.jar file. I have pinned the version of both this file and the Jenkins Docker container above, because Jenkinsfile-Runner has a pom dependency on Jenkins. If they&apos;re out of sync, bad things occur.</p>
<pre><code class="language-bash">RUN mkdir -p /app/bin &amp;&amp; \
    mkdir -p /app/jenkins_home &amp;&amp; \
    mkdir -p /app/jenkins &amp;&amp; \
    wget https://repo.jenkins-ci.org/releases/io/jenkins/jenkinsfile-runner/jenkinsfile-runner/1.0-beta-10/jenkinsfile-runner-1.0-beta-10.jar -O /app/bin/jenkinsfile-runner.jar
</code></pre>
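<p>Since a version mismatch is the main failure mode here, it&apos;s worth verifying that the war and the runner line up before building. A quick sanity check, assuming the war was copied to ~/Projects in Step 2 - the war&apos;s manifest lists the Jenkins version it ships:</p>
<pre><code class="language-bash"># Print the Jenkins version embedded in the war&apos;s manifest.
# It should match the version Jenkinsfile-Runner was built against (2.176.2 here).
unzip -p ~/Projects/jenkins.war META-INF/MANIFEST.MF | grep Jenkins-Version
</code></pre>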
<p>With this, we&apos;re moving all of our files into the container in their proper places.</p>
<pre><code class="language-bash">COPY jenkins_home/ /app/jenkins_home
COPY jenkins.war /app/bin/jenkins.war
COPY jenkinsfile-runner /app/bin/jenkinsfile-runner
COPY Test-Jenkinsfile /tmp/Jenkinsfile
</code></pre>
<p>We&apos;re setting the home directory for Jenkins so that Jenkins will not try to reinitialize itself.</p>
<pre><code class="language-bash">ENV JENKINS_HOME /app/jenkins_home
</code></pre>
<p>Ok! A couple of things are going on here. To start, we&apos;re unzipping the Jenkins war so it can be used. We&apos;re then symlinking the jenkinsfile-runner execution script into the PATH and marking it executable, and we&apos;re setting /bin/bash as the command. GitLab will override the entry point.</p>
<pre><code class="language-bash">RUN unzip /app/bin/jenkins.war -d /app/jenkins &amp;&amp; \
    ln -s /app/bin/jenkinsfile-runner /usr/bin/jenkinsfile-runner &amp;&amp; \
    chmod +x /usr/bin/jenkinsfile-runner
CMD [&quot;/bin/bash&quot;]
</code></pre>
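<p>Assembled, the complete Dockerfile looks like this:</p>
<pre><code class="language-bash">FROM openjdk:11-jdk

RUN mkdir -p /app/bin &amp;&amp; \
    mkdir -p /app/jenkins_home &amp;&amp; \
    mkdir -p /app/jenkins &amp;&amp; \
    wget https://repo.jenkins-ci.org/releases/io/jenkins/jenkinsfile-runner/jenkinsfile-runner/1.0-beta-10/jenkinsfile-runner-1.0-beta-10.jar -O /app/bin/jenkinsfile-runner.jar

COPY jenkins_home/ /app/jenkins_home
COPY jenkins.war /app/bin/jenkins.war
COPY jenkinsfile-runner /app/bin/jenkinsfile-runner
COPY Test-Jenkinsfile /tmp/Jenkinsfile

ENV JENKINS_HOME /app/jenkins_home

RUN unzip /app/bin/jenkins.war -d /app/jenkins &amp;&amp; \
    ln -s /app/bin/jenkinsfile-runner /usr/bin/jenkinsfile-runner &amp;&amp; \
    chmod +x /usr/bin/jenkinsfile-runner
CMD [&quot;/bin/bash&quot;]
</code></pre>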
<p>From here, stage all of your files with <code>git add .</code>, then commit and push them up: <code>git commit -am &quot;Add runner image&quot; &amp;&amp; git push</code>.</p>
<h2 id="step-4-how-do-we-use-this-monstrosity">Step 5: How do we use this monstrosity?</h2>
<p>Go ahead and make a new repo - this one is where your Jenkinsfile application goes. We&apos;re going to create a simple <code>.gitlab-ci.yml</code> file to do the building; you&apos;ll see mine below. In the script block, it calls the runner and specifies our Jenkins instance and our plugins directory, then hands it a Jenkinsfile to execute.</p>
<pre><code class="language-yaml">stages:
    - build-jenkins

&quot;JenkinsFile Build&quot;:
    stage: build-jenkins
    image: registry.gitlab.com/lackastack/article-repos/gitlab-jenkinsfile-runner:master
    script:
      - jenkinsfile-runner -w /app/jenkins -p /app/jenkins_home/plugins/ -f ./Jenkinsfile

</code></pre>
<p>Now, go ahead and add a Jenkinsfile. The simpler the better...</p>
<pre><code class="language-groovy">pipeline {
    agent any
    parameters {
        string(name: &apos;param1&apos;, defaultValue: &apos;&apos;, description: &apos;GitLab&apos;)
        string(name: &apos;param2&apos;, defaultValue: &apos;&apos;, description: &apos;Really Whips The Llamas Waterbowl.&apos;)
    }
    stages {
        stage(&apos;Build&apos;) {
            steps {
                echo &apos;Hello Field!&apos;
                echo &quot;message: ${params.param1}&quot;
                echo &quot;param2: ${params.param2}&quot;
                sh &apos;ls -la&apos;
            }
        }
    }
}

</code></pre>
<h2 id="step-5-conclusion">Step 6: Conclusion.</h2>
<p>Let&apos;s talk about this a bit. The way this works is that it starts a Docker container with a Jenkins instance inside it and uses the Jenkinsfile-Runner project to piggyback on that Jenkins and run our file. In an ideal production environment, you would take your own Jenkins home directory and war file and put them into this container to run your Jenkinsfiles from within GitLab. The catch is versions: Jenkinsfile-Runner has a dependency on Jenkins, so the versions have to match, otherwise the plugins fail because they see the Jenkins version that Jenkinsfile-Runner is built against.</p>
<p>Another thing not covered is credentials. With this method, you have to modify the credentials files in the Jenkins home directory to store credentials. Once you store them there, you can use them in your Jenkinsfile. You can even edit those files as part of the GitLab CI job.</p>
<p>Lastly, plugins. Many Jenkins jobs use numerous plugins within their pipeline, but Jenkins jobs can also rely on plugins that act outside the pipeline. Currently, there is no solution for those, since they&apos;re not in the Jenkinsfile. However, if you move those commands into your Jenkinsfile, you should be good to go.</p>
<p>This isn&apos;t a bulletproof solution, but it&apos;s close; you&apos;ll need to manage it and customize it. It&apos;s entirely possible that you can just copy your existing Jenkins war and home directory into the container and run everything there. That should work. If not, you&apos;ll need to make changes to the Jenkins in the container to get to 100%.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Multiple Approvers in GitLab CI for Change Management]]></title><description><![CDATA[Learn how to add multiple approvers to a GitLab CI Pipeline.]]></description><link>https://christophertimberlake.com/multiple-approvers-in-gitlab-ci-for-change-management/</link><guid isPermaLink="false">638971d6bb655d2c4188968c</guid><category><![CDATA[GitLab]]></category><dc:creator><![CDATA[Chris Timberlake]]></dc:creator><pubDate>Sun, 10 Nov 2019 05:00:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In a previous article we wrote about how to use an external bot to obtain approvals. This method is one take on that problem. It has its advantages, such as the bot being able to tie into third-party services: it can handle logging, and it can collect information at approval time to pass to the pipeline. There&apos;s a lot of flexibility there. The downside is that it&apos;s an external bot, with everything that comes with that.</p>
<p>Today, we&apos;re going to show how to do multiple approvals in a GitLab CI pipeline with just GitLab and its built-in tools. We&apos;re going to be using Protected Environments, manual CI jobs, and approvers. The way this works is that the pipeline has an approvals stage with a job per approver; without an approval, the pipeline does not progress past that stage.</p>
<h2 id="make-an-environment">Make an Environment.</h2>
<p>The first step is to go to <em><strong>Operations -&gt; Environments</strong></em> and create a new environment. I named mine <em>ctimberlake-approval</em> - my username first, then <em>approval</em>. This makes it easy to know what it&apos;s for at a glance. Leave the URL blank. You&apos;ll want to do this for every person who needs to approve the job.</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://christophertimberlake.com/content/images/2022/12/multiple-approver-changemangement-1.png" class="kg-image" alt loading="lazy" width="1418" height="786" srcset="https://christophertimberlake.com/content/images/size/w600/2022/12/multiple-approver-changemangement-1.png 600w, https://christophertimberlake.com/content/images/size/w1000/2022/12/multiple-approver-changemangement-1.png 1000w, https://christophertimberlake.com/content/images/2022/12/multiple-approver-changemangement-1.png 1418w" sizes="(min-width: 720px) 720px"></figure><!--kg-card-begin: markdown--><p>Once that&apos;s done and you have the environments, you need to protect them. Go to <em><strong>Settings -&gt; CI/CD -&gt; Protected Environments</strong></em>, click the user&apos;s environment, then select the user.</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://christophertimberlake.com/content/images/2022/12/multiple-approver-changemangement-2.png" class="kg-image" alt loading="lazy" width="1728" height="690" srcset="https://christophertimberlake.com/content/images/size/w600/2022/12/multiple-approver-changemangement-2.png 600w, https://christophertimberlake.com/content/images/size/w1000/2022/12/multiple-approver-changemangement-2.png 1000w, https://christophertimberlake.com/content/images/size/w1600/2022/12/multiple-approver-changemangement-2.png 1600w, https://christophertimberlake.com/content/images/2022/12/multiple-approver-changemangement-2.png 1728w" sizes="(min-width: 720px) 720px"></figure><!--kg-card-begin: markdown--><p>After this is done, your setup should look like the following. If it does, the environment setup is complete.</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://christophertimberlake.com/content/images/2022/12/multiple-approver-changemangement-3.png" class="kg-image" alt loading="lazy" width="1780" height="360" srcset="https://christophertimberlake.com/content/images/size/w600/2022/12/multiple-approver-changemangement-3.png 600w, https://christophertimberlake.com/content/images/size/w1000/2022/12/multiple-approver-changemangement-3.png 1000w, https://christophertimberlake.com/content/images/size/w1600/2022/12/multiple-approver-changemangement-3.png 1600w, https://christophertimberlake.com/content/images/2022/12/multiple-approver-changemangement-3.png 1780w" sizes="(min-width: 720px) 720px"></figure><!--kg-card-begin: markdown--><p>With the setup done, we can wire it into the pipeline. Below is our pipeline; I&apos;m going to explain it simply. First, we define the stages. We have a build stage - this could be anything you want - and then an approvals stage. Everything before the approvals stage runs without delay, but when the pipeline reaches approvals, it stops cold. The final stage is release, which only fires once we have the approvals.</p>
<p>The magic here works because we&apos;ve specified the approval job with three things.</p>
<ol>
<li><code>when: manual</code> - This will prevent the pipeline from proceeding without manual intervention.</li>
<li><code>allow_failure: false</code> - This prevents the release stage from running on its own. Without it, the pipeline won&apos;t stop at the approvals.</li>
<li><code>environment: name: ctimberlake-approval</code> - Because this environment is protected, the only people who can trigger this job are those with access to the environment.</li>
</ol>
<pre><code class="language-yaml">stages:
  - build
  - approvals
  - release

build-1:
  stage: build
  script:
    - echo &quot;We&apos;re building here&quot;

ctimberlake-approval:
  stage: approvals
  when: manual
  allow_failure: false
  environment:
    name: ctimberlake-approval
  script:
    - echo &quot;We&apos;re just doing a simple echo statement here, but a curl request could be made to record the approval in a central compliance tool.&quot;

versable-approval:
  stage: approvals
  when: manual
  allow_failure: false
  environment:
    name: versable-approval
  script:
    - echo &quot;We&apos;re just doing a simple echo statement here, but a curl request could be made to record the approval in a central compliance tool.&quot;

release-1:
  stage: release
  script:
    - echo &quot;We&apos;re releasing stuff here&quot;
</code></pre>
<p>If you did everything right, you&apos;ll be met with a pipeline like the one below. It shows that I could approve one job (and I did). However, the release stage has not triggered yet because Michael hasn&apos;t hit the button. When he does, release will fire.</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://christophertimberlake.com/content/images/2022/12/multiple-approver-changemangement-4.png" class="kg-image" alt loading="lazy" width="1432" height="488" srcset="https://christophertimberlake.com/content/images/size/w600/2022/12/multiple-approver-changemangement-4.png 600w, https://christophertimberlake.com/content/images/size/w1000/2022/12/multiple-approver-changemangement-4.png 1000w, https://christophertimberlake.com/content/images/2022/12/multiple-approver-changemangement-4.png 1432w" sizes="(min-width: 720px) 720px"></figure><!--kg-card-begin: markdown--><h2 id="how-to-protect-the-pipeline">How to protect the pipeline.</h2>
<p>If you take this approach, the .gitlab-ci.yml is what stops your pipeline from running straight through and requires approval for the end steps. So what prevents someone from just editing that file, removing the approval steps, and doing a rogue release?</p>
<p>Simple answer, We lock the .gitlab-ci.yml file.</p>
<p>In the root of your repository, create a file named <code>CODEOWNERS</code>. In this file, add the line <code>.gitlab-ci.yml @your-username-here</code> and commit it. Now, any time someone wants to edit the pipeline, they&apos;ll need specific approval from someone listed in the CODEOWNERS file. Michael can&apos;t edit the .gitlab-ci.yml file to remove my approvals! <em>Muhaha.</em></p>
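<p>A minimal CODEOWNERS for this might look like the following sketch - the usernames are placeholders, so substitute your own approvers:</p>
<pre><code># Pipeline changes require approval from a listed owner
.gitlab-ci.yml @ctimberlake @versable
# The CODEOWNERS file should protect itself, too
CODEOWNERS @ctimberlake
</code></pre>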
<p>If you succeeded, you should now see the following when you look at .gitlab-ci.yml</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://christophertimberlake.com/content/images/2022/12/multiple-approver-changemangement-5.png" class="kg-image" alt loading="lazy" width="1270" height="494" srcset="https://christophertimberlake.com/content/images/size/w600/2022/12/multiple-approver-changemangement-5.png 600w, https://christophertimberlake.com/content/images/size/w1000/2022/12/multiple-approver-changemangement-5.png 1000w, https://christophertimberlake.com/content/images/2022/12/multiple-approver-changemangement-5.png 1270w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[Using GitLab as a Self Service Replacement - Part 1]]></title><description><![CDATA[Learn how to build forms for automation and self-service portals with GitLab CI and GitLab Pages.]]></description><link>https://christophertimberlake.com/using-gitlab-as-a-self-service-replacement-part-1/</link><guid isPermaLink="false">638510629211d615dd84fe24</guid><category><![CDATA[GitLab]]></category><dc:creator><![CDATA[Chris Timberlake]]></dc:creator><pubDate>Thu, 17 Oct 2019 04:00:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Many companies and organizations that deal with IT have some form of self-service, whether that&apos;s something like ServiceNow, Microsoft SharePoint, or something random like a Rube Goldberg machine initiated by an e-mail. In each case, they have a system where users can go to a form, initiate a request, and have a process run. This functionality is a key requirement of process control theory. Today we&apos;re going to discuss how we can use GitLab to handle self-service.</p>
<p><em>Before we begin: this is not meant to replace ServiceNow or SharePoint. It&apos;s to show an alternative using GitLab.</em></p>
<p>For this article, you&apos;re going to need the following:</p>
<ol>
<li>A GitLab Instance that supports GitLab Pages.</li>
<li>A GitLab admin account, or Owner access to two GitLab repositories.</li>
<li>A WYSIWYG editor. For this article we&apos;re using Adobe Dreamweaver.</li>
<li>Some Knowledge of HTML</li>
</ol>
<p>For the next Article in this series you&apos;ll need:</p>
<ol>
<li>Some Knowledge of Ansible</li>
<li>A cloud provider account. Vultr, DigitalOcean, Amazon AWS, Etc.</li>
</ol>
<h2 id="the-forms">The Forms</h2>
<p>The most important part of this is the form. After all, without it, the user can&apos;t make a request. We&apos;re going to use GitLab Pages to serve our self-service form. So, at this point, go ahead and log in to GitLab and make a new repository. Call it something like &quot;Self-Service Portal&quot;. It should be based off of the Pages/HTML template; see the picture below.</p>
<p><img src="https://christophertimberlake.com/content/images/2022/11/gitlab-selfservice-1.png" alt="gitlab-selfservice-1" loading="lazy"></p>
<p>Once this is done, you&apos;re given a repository with a <em>public/index.html</em> and <em>public/style.css</em> structure. Now we&apos;re going to need to pull down that repo and modify the index.html file.</p>
<!--kg-card-end: markdown--><pre><code class="language-bash">git clone https://repourl/repo</code></pre><!--kg-card-begin: markdown--><p>With the files downloaded, open that <em>public/index.html</em> file with your WYSIWYG editor - if you remember, I&apos;m using Dreamweaver. Once it&apos;s open, you&apos;re going to need to do the following:</p>
<ol>
<li>Open the file in the WYSIWYG Editor.</li>
<li>Delete all text in the body such as <code>This is a simple plain-HTML website on GitLab Pages, without any fancy static site generator.</code></li>
<li>You&apos;re going to want to add a form to this page.</li>
<li>Within that form, you&apos;re going to want to add fields and labels. One label and field for each value.</li>
</ol>
<p>Below is a screenshot from my Dreamweaver editor showing what I&apos;ve created so far. Don&apos;t worry about the details of things now, just try to get a form laid out.</p>
<p><img src="https://christophertimberlake.com/content/images/2022/11/gitlab-selfservice-2.png" alt="gitlab-selfservice-2" loading="lazy"></p>
<h2 id="the-cicd-portion">The CI/CD Portion</h2>
<p>Now that we have that set up, we need to create another GitLab repo. This repo is going to house all of our infrastructure code. The goal is for it to contain no secrets and no specific data - it&apos;s meant to be a completely dumb pipeline. It should only know what we pass in from the form, which means all secrets, keys, data, etc. should come from the form.</p>
<p><strong>Below is an example of how it should be setup.</strong></p>
<p><img src="https://christophertimberlake.com/content/images/2022/11/gitlab-selfservice-3.png" alt="gitlab-selfservice-3" loading="lazy"></p>
<p><strong>At this point, you have a completely blank repository. It should have a readme, and that&apos;s all.</strong></p>
<p><img src="https://christophertimberlake.com/content/images/2022/11/gitlab-selfservice-4.png" alt="gitlab-selfservice-4" loading="lazy"></p>
<p>Do you see that button that says &quot;Set Up CI/CD&quot;? Go ahead and click that, this will take you to the GitLab Code Editor. We&apos;re going to walk through creating a GitLab CI File. You can read tons more docs on this <a href="https://docs.gitlab.com/ee/ci/yaml/">here</a>.</p>
<p>For starters, we&apos;re going to want to define the stages our jobs will run in. For this example, we&apos;re going to have only two: the first is a sanity check, the second is the infrastructure portion. The top of your CI file should look like this:</p>
<!--kg-card-end: markdown--><pre><code class="language-yaml">stages:
    - check-vars
    - do-infrastructure</code></pre><p>Pretty descriptive right? Next we&apos;re going to define the first stage, the one that checks the variables. For this article, it&apos;s just echoing them.</p><pre><code class="language-yaml">check-variables:
    stage: check-vars
    script:
        - echo &quot;VM Name is $VM_Name&quot;
        - echo &quot;Amazon AWS Region is $AWS_Region&quot;
        - echo &quot;E-Mail is $EMAIL_NOTIFICATION&quot;
        - echo &quot;Reason is $REASON&quot;</code></pre><p>The first line above defines the job name <code>check-variables</code>. Then we indent and define the stage, which needs to match one of our previously defined stages: <code>stage: check-vars</code>. The script block defines what we need to do in this job; imagine each line of a bash script listed out with a <code>-</code> preceding it to designate it as an array item. For the infrastructure block, we&apos;re just going to echo a placeholder for now.<br></p><pre><code class="language-yaml">build-infrastructure:
    stage: do-infrastructure
    script:
        - echo &quot;Do Infrastructure in Article #2 W00T!&quot;
</code></pre><!--kg-card-begin: markdown--><p><strong>Go ahead and commit those changes. With that, our pipeline should be good to go.</strong></p>
<h2 id="connecting-the-form-to-the-cicd">Connecting The Form to the CI/CD.</h2>
<p>Now, the big question arises, How the heck do we connect these two things? It&apos;s a secret, it&apos;ll cost you $3.50... No, it&apos;s simple. We&apos;ll simply use a <a href="https://docs.gitlab.com/ee/ci/triggers/">pipeline trigger</a>.</p>
<p><strong>In the left navigation of the Pipeline Repo, go to Settings -&gt; CI/CD. Then click expand on Pipeline Triggers</strong></p>
<p><img src="https://christophertimberlake.com/content/images/2022/11/gitlab-selfservice-5.png" alt="gitlab-selfservice-5" loading="lazy"></p>
<p>From here you&apos;ll want to create a trigger. In the Trigger Description field, enter something descriptive; in our case, we&apos;re going to call it &quot;Self-Service Trigger&quot;. Once you create it, you&apos;ll be granted a pipeline trigger token. <strong>Save this token somewhere.</strong> Below that you&apos;ll see a number of use cases for the pipeline trigger. Find the section <em>&quot;Use Webhook&quot;</em> and copy the URL in that field. Entirely.</p>
<p><img src="https://christophertimberlake.com/content/images/2022/11/gitlab-selfservice-6.png" alt="gitlab-selfservice-6" loading="lazy"></p>
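<p>Before wiring up the form, you can sanity-check the trigger from a terminal using GitLab&apos;s pipeline trigger API. A sketch - replace the project ID and token with your own:</p>
<pre><code class="language-bash">curl --request POST \
  --form &quot;token=YOUR_TRIGGER_TOKEN&quot; \
  --form &quot;ref=master&quot; \
  --form &quot;variables[VM_Name]=test-vm&quot; \
  &quot;https://gitlab.com/api/v4/projects/YOUR_PROJECT_ID/trigger/pipeline&quot;
</code></pre>
<p>If the token and ref are valid, a new pipeline should appear under CI/CD -&gt; Pipelines.</p>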
<div class="notice important">
    <h3>Pipeline Trigger Ownership &amp; Security</h3>
    <p>This token, when used in a form like this, is publicly accessible, meaning people can trigger pipelines with it and insert false data. It&apos;s important that you restrict access to the form in a production environment, and that your pipeline does not contain secrets. All variables and secrets should be injected by the user via the form. That way, without inserting the API key, the pipeline job fails.</p>
    <br>
    <p>The person who makes a pipeline trigger is defined as that trigger&apos;s owner. This means that any time the trigger is run, the pipeline will run with their permissions. Consider making a lower-access account for this.</p>
    <p><a href="https://docs.gitlab.com/ee/user/project/pages/introduction.html#gitlab-pages-access-control-core">Consider Enabling GitLab Pages Security</a></p>
</div>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Now that we have that information, we need to update our form. Go ahead and find your HTML file (it should be named index.html) and open it with Notepad or your favorite editor.</p>
<p>The first thing we need to do is set up the form to POST to the trigger pipeline. Find your <code>&lt;form</code> block and add <code>action=&quot;webhookurl&quot;</code>, with the <code>REF_NAME</code> being master and <code>TOKEN</code> replaced with your token. Then add <code>method=&quot;POST&quot;</code> to the same line to mark it as a POST request. My example is below.</p>
<!--kg-card-end: markdown--><pre><code class="language-html">&lt;form id=&quot;feedbackForm&quot; class=&quot;text-center&quot; action=&quot;https://gitlab.com/api/v4/projects/14883084/ref/master/trigger/pipeline?token=943227eec64c3ac6ab4e8436a45bfa&quot; method=&quot;POST&quot;&gt;</code></pre><!--kg-card-begin: markdown--><p>Now, when we submit the form it will trigger the pipeline! AWESOME! Except... We also want to pass variables. Remember above how we referenced the <code>$VM_Name</code>, <code>$AWS_Region</code>, <code>$EMAIL_NOTIFICATION</code>, <code>$REASON</code> above? Now, we need to assign those in our form. To link these values from the HTML Form to the pipeline, we need to set the <code>name</code> of the form field. This name <strong>must be</strong> named like this: <code>variables[VM_Name]</code> or <code>variables[AWS_Region]</code>.</p>
<p>Go ahead and make those changes, add <code>name=&quot;variables[VM_Name]&quot;</code> to your input field, or select the field in the editor and set the name. An example of this can be seen below.</p>
<p><img src="https://christophertimberlake.com/content/images/2022/11/gitlab-selfservice-7.png" alt="gitlab-selfservice-7" loading="lazy"></p>
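<p>In plain HTML, a finished field might look something like this sketch - everything except the <code>name</code> attribute is up to you:</p>
<pre><code class="language-html">&lt;label for=&quot;vm-name&quot;&gt;VM Name&lt;/label&gt;
&lt;input type=&quot;text&quot; id=&quot;vm-name&quot; name=&quot;variables[VM_Name]&quot;&gt;
</code></pre>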
<p><em>Now save this file, and commit it to the self-service repository.</em> The pipeline should kick off and your form should then become visible. You can find the URL for your self-service at <em>Settings -&gt; Pages</em>. It may take up to five minutes for this to happen.</p>
<h2 id="in-closing">In Closing</h2>
<p><em>At this point, if you&apos;ve followed everything, you should be able to submit the form and see the pipeline kick off. Let us know if you have any concerns!</em></p>
<p>You can see the <a href="https://lackastack.gitlab.io/article-repos/gitlab-selfservice/">example of the form I&apos;ve created here.</a></p>
<p>You can see the <a href="https://gitlab.com/lackastack/article-repos/gitlab-selfservice-pipeline/pipelines">pipeline being run and its result here.</a></p>
<p><img src="https://christophertimberlake.com/content/images/2022/11/gitlab-selfservice-8.png" alt="gitlab-selfservice-8" loading="lazy"></p>
<p><img src="https://christophertimberlake.com/content/images/2022/11/gitlab-selfservice-9.png" alt="gitlab-selfservice-9" loading="lazy"></p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>