Christopher Timberlake in Development · 30 minutes

RE: GitLab vs JFrog: Who Has the Right Stuff?

Every so often in the tech world a conflict arises. Usually it’s great for customers, and usually it stems from an article or impact statement from a group or company declaring something to be true. It’s always important to see multiple views when these occur, and to take a clear, unbiased look, because there’s a lot of nuance. While one group may be technically accurate, the core of the statement made might still be largely true.

Take the current GNOME theme conflict as an example. While it’s technically true that applications and Linux distributions can change themes, the actions taken by the GNOME community make it exceedingly difficult and incompatible to do so. So while the claim “GNOME wants to prevent theming” is technically incorrect, functionally it might as well be true.

However… every so often I read something that quite literally makes my eyes roll and angers me. Because not only do I know that what is being claimed is not accurate or true; I can prove the author knows it’s not true either. Yet we wouldn’t be here if it hadn’t been published. So here we are, dissecting GitLab vs JFrog: Who Has the Right Stuff?

PDF Backup

VERY IMPORTANT! The statements made here are my own. They are not a reflection of my employer, my customers, my company, or anyone else I’m affiliated with. Any forward-looking statements should not be considered investment advice, nor absolutes. On this blog, I represent myself and myself alone.


While impressive in scope, most of GitLab’s offerings are currently far from mature. Of the more than 40 rightward items not related to source code or CI/CD in GitLab’s maturity chart, only 3 are “complete.” Nearly half of those remaining are still “minimal.”

The most interesting part about this screenshot is that the top is cut off. The part you can’t see says:

GitLab has a broad scope and vision, enabling organizations to collaboratively plan, build, secure, and deploy software to drive business outcomes faster. To provide teams with complete transparency, consistency, and traceability, we are constantly iterating on existing and new features. Some stages and features are more mature than others. To convey the state of our feature set and be transparent, we have developed a maturity framework for categories, application types, and stages that considers both adoption and user experience. For a deeper understanding of the multiple dimensions that contribute to product success, please visit category health. We are present-day pessimists and long-term optimists and maturities will change, including changes to lower maturity rating, to reflect the bar we set for ourselves, our position in the market and for customers. Contributions from our community are an essential part of achieving this overall vision for GitLab.

To be clear: JFrog is using GitLab’s own words against them, intentionally omitting context. That context is that GitLab has a transparent and very clear formula for what makes a feature “Complete.” The formula includes feedback from internal and external customers, and GitLab continually probes and surveys customers about their experience and attempts to resolve any issues raised.

For a feature to be considered “Lovable,” it has to reach a 100% pass rate with internal and external customers, across large sample sizes. For a feature to be “Complete,” it requires an 80% pass rate. These 100% and 80% scores are formulated from UX, scale, and customer ratings. There are many cases where even a single bad report from a customer can drop a feature from “Lovable” to “Viable.”

Quite frankly, JFrog is using GitLab’s “present-day pessimist” formulas against them, without telling you, the reader, the truth. (If you want to learn more about this metric and how these numbers are created, click here.)


The GitLab suite provides a common user interface, and they stress as a core value that no feature is considered complete unless it can be controlled through the UI.
While this provides the appearance of a unified platform, it obscures the reality that many of the GitLab tools are not genuinely integrated with each other. Behind the UX curtain, few of the pieces interoperate smoothly — often not even sharing a common metadata model. Even as DevOps seeks to break down silos, many of GitLab’s DevOps and DevSecOps solutions remain siloed from each other.

There’s not even an example of what’s being said here. This is a common misdirection and a fallacy. However, I want to address it. Because this is something that I believe separates GitLab from many other companies.

The reason this exists is that GitLab has a core belief that everyone can contribute: technical and non-technical folks alike. The UI and UX are not a facade used to hide some technicality. They are tools that allow folks to contribute to GitLab quickly.

This is why GitLab has such a focus on having everything in the UI and readily, easily editable by all folks, including folks with disabilities. It also means any feature that is not UI/UX complete gets a lower customer satisfaction score and remains in the “Viable” category, as opposed to “Complete” or “Lovable.”

You can read what GitLab says about this below, or in full here.


While GitLab seeks to provide end-to-end tools for DevOps, they do not offer an end-to-end method for DevOps success.

When it comes to DevOps, a portfolio is not a platform. And our long experience in DevOps has shown that digital transformation success lies in managing your binaries, not just your code.

Hopefully you caught this already. But if you didn’t: this is the same FUD I pointed out above. There are no examples here, just a fallacy meant to plant seeds of doubt for the next section. And because there are no examples, there is nothing for me to even disprove.

The Right Stuff: Development vs Delivery

A binaries-centric approach — JFrog’s expertise — is the only way to successfully automate a modern organization’s software lifecycle that ensures trust and speed of delivery.

So… JFrog is the only one who holds the truth about how binary deployment is done? 🤔 We’ll touch on SBOMs and traceability soon…

Package Types

By these measures, GitLab’s current local repository support for 11 package types offers a solid start — although far fewer than the over 30 package types natively supported by Artifactory. Like Artifactory, GitLab also provides a Generic repo type, enabling users to centrally manage additional file types that are part of their releases – such as images, zip files, docs, and more.

JFrog is right here. GitLab is continually adding to its package types and support. There is work to be done there; one of the things that bothers me personally is the lack of RPM/Debian package support inside of GitLab. I’m also sure Debian, GNOME, and Red Hat, who all use GitLab, would love this support.

But aside from that, you’ll note that the examples listed for the Generic repo type are limited to “images, zip files, docs and more.” The Generic repo type is used for many things, including debs, RPMs, and other packages. Because of this, I can simply run yum install ${PATH_TO_RPM} and immediately pull the RPM down from the GitLab Generic repo. I also know that GitLab is working on native support for these formats, which, again, is why that feature is still marked “Viable.”
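To make that concrete, here is a sketch of a CI job that publishes an RPM to a project’s generic package registry using GitLab’s generic packages API. The RPM filename and version are placeholders; `CI_JOB_TOKEN`, `CI_API_V4_URL`, and `CI_PROJECT_ID` are standard GitLab CI predefined variables.

```yaml
# Sketch, not a drop-in config: publish an RPM to the generic package
# registry. "my-app-1.0.0-1.x86_64.rpm" and the "1.0.0" version are
# illustrative placeholders.
publish-rpm:
  stage: deploy
  script:
    - |
      curl --header "JOB-TOKEN: ${CI_JOB_TOKEN}" \
           --upload-file my-app-1.0.0-1.x86_64.rpm \
           "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/my-app/1.0.0/my-app-1.0.0-1.x86_64.rpm"
```

A consuming machine can then install straight from the package’s download URL, since yum accepts a direct URL to an .rpm file.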

Very few people, if anyone, are using GitLab’s Generic repo for images, zip files, or docs. Those are often regarded as artifacts from a CI/CD build.

Proxy Repository Caching

Everything said here is true. I wish GitLab had this functionality. It’s probably the #1 blocker for my customers to move off of Artifactory.

Build Metadata for SBOM and Traceability

This section is honestly what inspired me to write this article. This section is very, very incorrect. I take zero issue with what JFrog states their capabilities are. However, the description of GitLab’s capabilities is incorrect. I will concede that GitLab may not have the exact visible dashboards or UI/UX, but the functionality exists.

GitLab CI/CD offers some analog to a “build” which they call deployments. This enables you to store a record of the build event with your source code, along with where it was (or, for manual deployments, will be) deployed. But this data describes the event, not the deployed binary, and does not include any of the metadata required to produce an SBOM or replicate a deterministic build.

This is what the Environments dashboard looks like. For customers with many different environments, you’ll see each one here. The ‘ctimberlake-approval’ and ‘versable-approval’ entries are two separate environments. In this view, you can see the commit hash that was deployed, the commit message, when it was deployed, and, by hovering over my user icon under “Deployment,” that it was me who deployed it.

More importantly, you can click Job, which will show you the entire deployment log. From that log you can see the following…

  1. The exact GitLab runner that performed the deployment.
  2. The exact container the deployment ran inside, including its SHA.
  3. The person who triggered the deployment.
  4. The SHA of the Git commit that was deployed.
  5. Whether it finished or not.
  6. If artifacts are set up (which they aren’t here), you can even see and download…
    1. The exact binary deployed.
    2. All security scan logs against the source code AND the binary.
    3. A complete BOM of everything in the binary and/or container.
    4. A complete list of all commits inside the binary.
    5. A complete list of all licenses assigned to the binary.
    6. Any other build information and metadata for this deployment.
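A deployment job that records all of the above might look roughly like this. This is a sketch, not GitLab’s own example: `deploy.sh` is a hypothetical deployment script, the image tag is a placeholder, and the syft call stands in for whichever third-party SBOM tool you prefer.

```yaml
# Sketch: a deployment job that ties the job to an environment (the
# deployment record) and attaches the binary plus its build metadata
# as downloadable artifacts.
deploy-prod:
  stage: deploy
  image: alpine:3.18                 # placeholder image
  script:
    - ./deploy.sh app.bin            # hypothetical deployment script
    - syft app.bin -o cyclonedx-json > sbom.cdx.json   # third-party SBOM tool
  environment:
    name: production
  artifacts:
    paths:
      - app.bin                      # the exact binary deployed
      - sbom.cdx.json                # a complete BOM of what is in it
    expire_in: never                 # keep the record alongside the deployment
```

With artifacts retained, the deployment page links directly to the binary and its SBOM for that exact run.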

Instead of metadata, GitLab’s dependency list function works only with its dependency scanning tool to produce dependency data by parsing the source in the GitLab VCS. Consequently, GitLab cannot produce an absolutely reliable SBOM from your deployable binary — they can only reconstruct what is likely to be in your binary as deduced from the source code.

So, first off, this isn’t a proper description of what dependency scanning does, nor is it applicable in this case.

Secondly, the rest of this statement is just bullcrap. For true security, you don’t just want a binary. You want the binary, what created that binary, the code that binary is comprised of, any and all logs and introspection into that binary, and the environment consuming that binary.

The statement that GitLab can only reconstruct what is in the binary is also false. With GitLab CI, you can include any and all third-party tools to perform introspection on the binary. You’re not bound to what GitLab provides; you’re bound to whatever tools you can run in a container. GitLab then provides a framework, via artifacts, the Security Dashboard, and the UI/UX JFrog insulted earlier, to surface and make visible all the details of what’s in your binary.

You need to tell the whole story of how your binary came to be, not just what’s inside it. GitLab does both.

Build Promotion and Release Staging

Artifactory enables build promotion, in which an immutable binary runs through the entire SDLC. With a repository for each SDLC stage, a build with its metadata can be promoted simply by shifting it to the next repo in sequence.

In this “build once and promote” method, the same build is evaluated at every stage, assuring absolute consistency through the DevOps pipeline.

JFrog hits the nail on the head here. It is imperative that whatever binaries you make are immutable and shipped with the source code, the binary, the logs/records of how it came to be, and all build materials. If a new binary is desired, the entire binary creation process must start over. This is how you avoid ransomware and many other supply chain attacks.

While GitLab provides repos for generic artifacts like release binaries and container registries for Docker images, GitLab simply doesn’t provide built-in support for build promotion. While GitLab CI/CD does support recording deployments to defined environments in your GitLab projects, this feature is unavailable to use with other automation tools.

This part though? No, this is entirely false. Any time a CI/CD job inside GitLab is triggered and tied to a deployment, that deployment records all the information about what the job did, what artifacts were in it, which binaries were released, and so on. You’ll notice my deployment screenshots above are over a year old; hover over the date for the exact time and you’ll find it’s closer to two years. That’s a decent history.

But let’s keep going. Let’s talk about that part where “GitLab simply doesn’t provide built-in support for build promotion”…

  1. GitLab provides protected environments. Any CI/CD job assigned to such an environment requires a manual or automated approver before it can deploy. It’s quite easy to add Dev, QA, Acceptance, and Prod deployment jobs inside GitLab CI, then tie a deployment to each, with an approver.
  2. GitLab also provides multiple security features for deploying safely.
  3. Next, GitLab provides manual rollouts of builds, timed rollouts, and blue-green deployments.
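Here is a sketch of what “build once and promote” looks like in GitLab CI. Stage names, `make app.bin`, and `deploy.sh` are illustrative; the key ideas are that one build job produces the artifact, every later stage reuses that same artifact via `needs`, and each promotion sits behind a manual gate on an environment you can mark as protected in the project settings.

```yaml
# Sketch: one immutable artifact promoted through dev -> qa -> prod.
stages: [build, dev, qa, prod]

build:
  stage: build
  script: ["make app.bin"]
  artifacts:
    paths: [app.bin]

.deploy:
  script: ["./deploy.sh app.bin"]   # same artifact at every stage
  needs: [build]

deploy-dev:
  extends: .deploy
  stage: dev
  environment: { name: dev }

deploy-qa:
  extends: .deploy
  stage: qa
  when: manual                      # approval gate before promotion
  environment: { name: qa }

deploy-prod:
  extends: .deploy
  stage: prod
  when: manual
  environment: { name: production } # mark as protected in settings
```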

**Now, I know what you’re going to say!** “Chris, JFrog means they can verify the binary goes to the same spot!”

Yes, and when a trusted GitLab CI process starts, the source code goes in one side and comes out the other as a binary. Then both the source and the binary are scanned, and the binary is passed to a deployment job to be deployed.

this feature is unavailable to use with other automation tools.

Also, these features work with S3, container, Lambda, SSH, Kubernetes, and, well, any other type of deployment you can imagine.

Distribution Solution


GitLab offers no distribution solution at all, falling far short of their claim to be an end-to-end DevOps platform.

Alright, look: artifacts on GitLab are stored in S3 storage, as are repositories. S3 storage can be as distributed, or as not distributed, as you want.

Does anyone want to argue Amazon S3 isn’t distributed?

Connect and Automate

While GitLab’s CI/CD is strongly integrated with GitLab source code repositories, each tool for build, test, and deployment must be invoked through their individual command line interfaces in shell scripts.

This isn’t really true. GitLab CI uses containers, and those containers can include binaries, shell scripts, Python scripts, JavaScript executables, etc. It doesn’t matter.

Usability-wise, which is what I think JFrog is talking about here, they may have us beat in some ways. GitLab CI does require writing YAML, whereas JFrog may have means to set up a pipeline quickly. GitLab is working on this, and there are third-party visual CI editors. So for now, the point goes to JFrog.

Simple CI/CD Creation / Complex Pipelines

No misconceptions here, JFrog talks more about their product than GitLab. But nothing is wrong here. GitLab has feature parity in both sections.

Signed Pipelines

As part of the unified JFrog Platform, Pipelines creates a blockchain-like, cryptographically-signed ledger for each run of a pipeline.

No, it doesn’t. The page linked has no examples or descriptions of anything remotely like a blockchain.

What it talks about is the ability to use a private key to verify artifacts with a zero-trust model. The exact example shown in JFrog’s documentation involves creating a private/public key pair, signing items with the private key and then destroying it, and verifying with the public key. All of this is very easily done with GitLab.
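For instance, the same sign-then-discard flow can be reproduced with plain OpenSSL inside any GitLab CI job. This is a sketch under my own assumptions, not JFrog’s or GitLab’s documented procedure; the filenames and the stand-in artifact are placeholders.

```shell
# Sketch: generate a key pair, sign an artifact, destroy the private key,
# then verify the signature with the public key alone.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out signing.key
openssl pkey -in signing.key -pubout -out signing.pub   # publish this key
echo "release binary contents" > app.bin                # stand-in artifact
openssl dgst -sha256 -sign signing.key -out app.bin.sig app.bin
rm -f signing.key                                       # destroy the private key
openssl dgst -sha256 -verify signing.pub -signature app.bin.sig app.bin
# prints "Verified OK"
```

Downstream consumers only ever need signing.pub and the .sig file to prove the artifact is untouched.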

APIs, CLI, and Integrations

GitLab CI/CD is tightly coupled with GitLab source control repositories. While the JFrog Platform provides end-to-end DevOps with Pipelines, you can also use Artifactory with CI/CD tools you might prefer — whether that’s Jenkins, CircleCI, or even GitLab CI/CD. JFrog provides several integrations out-of-the-box, or choose from a large family of technology partner integrations.

Yeah, GitLab does all of these things too. This is a classic case of misdirection, designed to imply GitLab doesn’t have them.

Protect Your Business

SCA Scanning

GitLab Dependency Scanning tool is tightly integrated — and can only be used — with GitLab source control repositories and GitLab CI/CD to identify vulnerable open-source dependency references in source code. It scans source code from within a CI/CD pipeline; information about vulnerabilities found and their severity is reported in the merge request, so a developer can act to remediate. GitLab does not scan packages in GitLab Package Registry.

This is a misdirection. Vulnerabilities found in a CI/CD pipeline appear not only in merge requests, but also in the Security Dashboard we spoke about earlier. They also appear in JSON and log files, which can be aggregated elsewhere.

Also, GitLab CI can scan any package you want; it just has to exist in the container directory that’s being scanned. The GitLab scan tools are VERY configurable. For example, Container Scanning allows you to scan any Docker container, anywhere.
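As a sketch of that configurability, GitLab’s bundled container scanning template can be pointed at an arbitrary image. The registry path below is a placeholder, and note that the variable name has changed across GitLab versions (recent versions use `CS_IMAGE`; older ones used `DOCKER_IMAGE`), so check the docs for your release.

```yaml
# Sketch: scan an arbitrary image, not just the one this pipeline built.
include:
  - template: Security/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_IMAGE: registry.example.com/some-group/some-app:1.2.3  # placeholder
```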

Container Scanning

GitLab container scanning uses the open source Trivy engine as of GitLab 14.0. Container scanning is currently not integrated into the GitLab Container Registry flow – Docker images can only be scanned through a separate job in GitLab CI/CD. Images pushed to the registry are not automatically scanned.

How awkward for JFrog.

No, but seriously, who cares about scanning a container tag on its own? I care about everything done to create that container, which includes dependency scans, container scans, SAST scans, etc. All of which is now visible from the new container UI when you look at the pipeline that created the container.

Are you seeing a pattern here? Everything about the history of something built with GitLab can be seen by looking at the historical record of the pipeline that created it. Because the history of an artifact is more than a single scan or signature.

Automated Enforcement Policies

GitLab’s Dependency Scanning, Container Scanning, License Compliance and other security tools all provide reports that must be read, evaluated, and acted upon by a human operator — a set of high-friction manual steps that inhibit speedy software delivery.

This isn’t remotely true. Yes, a human operator needs to look at the vulnerabilities that show up. Are they implying that a human never has to validate or see a vulnerability?

GitLab surfaces all these vulnerabilities in a Security Dashboard that makes it easy to take action on them. More importantly, it provides visibility into all vulnerabilities so users can see them and know their attack surface.

JFrog Xray empowers security teams to configure rules and policies for vulnerability severity and license conflicts, and set up automated watches to detect violations and enforce those policies after a scan. Through JFrog partner integrations, you can also report violations through analytic tools such as Splunk or DataDog, or alert teams with incident reports through PagerDuty, Slack, or MS-Teams.

Yes, GitLab can do that too.

The area where JFrog has a point..

GitLab is not great at scanning artifacts and items at rest. However, this can easily be achieved via a scheduled pipeline that performs introspection on all of your artifacts and binaries.
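Such a scheduled job is a few lines of CI config. This is a sketch: `rescan.sh` is a hypothetical script that downloads and scans each stored binary, while `$CI_PIPELINE_SOURCE == "schedule"` is the standard GitLab rule for restricting a job to pipeline schedules.

```yaml
# Sketch: run an at-rest rescan only when triggered by a pipeline schedule
# (configured under CI/CD > Schedules, e.g. nightly).
rescan-artifacts:
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - ./rescan.sh   # hypothetical: fetch and introspect stored binaries
```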

Then again, if the binary has the same hash as it did when it was uploaded, do you need to rescan it at rest 24/7?
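That check is a one-liner. A minimal sketch, with placeholder filenames and a stand-in artifact: record the checksum at upload time, then compare it whenever you want to confirm the bits at rest haven’t changed since the last scan.

```shell
# Sketch: has this binary changed since it was uploaded and scanned?
echo "release binary contents" > app.bin
sha256sum app.bin > app.bin.sha256   # recorded at upload time
# ...any time later, while the binary sits at rest...
sha256sum -c app.bin.sha256          # prints "app.bin: OK" if unchanged
```

(Newly published CVEs are the one reason you might still rescan unchanged bits, which a scheduled pipeline covers.)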

Impact Analysis

Yeah, GitLab doesn’t do this.

IDE Integrations

As noted above, GitLab Dependency Scanning is driven through GitLab CI/CD. No IDE plugins for GitLab are currently available

No one tell JFrog about this: VS Code has an IDE plugin for GitLab that will allow you to execute a pipeline against your code. This way, all policies and scan enforcements run against your code in a stable and repeatable environment.

Running the scans locally in your IDE is cool. The issue is that it’s an untrusted environment; the CI/CD environment of GitLab is trusted. Plus, you can make GitLab CI/CD pipelines fast by throwing cloud resources at them, which is better, in my opinion. No developer I know likes to hear the jet engine of their MacBook fans while running scans locally.

Scale to Infinity

Regional Geo-Replication

GitLab Geo supports limited site replication through unidirectional mirroring from a single primary GitLab site to read-only secondary sites. We consider this inadequate to support the way that global development teams collaborate.

GitLab moves WAY more than artifacts between Geo instances. A bi-directional sync is a very difficult process that, admittedly, GitLab does not have. But I’d also challenge why you need it.

Within each federated repository, changes made to artifacts or to the repository configuration on one site are automatically synchronized to other member sites (up to 10).

You can have multiple GitLab Geo instances.

Multicloud and Hybrid / Unlimited Users

GitLab hosts all cloud (SaaS) services on a single cloud platform, GCP in the U.S., making multi-cloud redundancy impossible. Although GitLab has both a SaaS service and self-managed installation option, these are separate and users cannot work between and across them.

These are partly true. I’m not going to comment on them.

Private Distribution Network

This is easily done with GitLab Self-Managed. It can also be achieved with private runners.

In Closing… It’s my turn for FUD.

GitLab’s ambitions are certainly large, according to their mission statement. “Our BHAG over the next 30 years is to become the most popular collaboration tool for knowledge workers in any industry.”

Why wait three decades? The JFrog DevOps Platform, powered by the industry’s most popular tool for binaries management, is available today and you can start for free !

So… you did read the mission statement about how everyone can contribute… and still chose to bash the UI/UX as a flaw?

Why is it that, of all the articles on your blog, no one wanted to put their name on this one? No one wanted to claim authorship.