FedRAMP Requirements Checklist for Container Vulnerability Scanning
https://anchore.com/white-papers/fedramp-requirements-for-containers-checklist/
Fri, 10 May 2024

With the clock ticking on new vulnerability scanning rules, organizations must adhere to a number of FedRAMP requirements. Prepare containerized applications for FedRAMP authorization with this checklist.

RMF and ATO with RAISE 2.0 — Navy’s DevSecOps solution for Rapid Delivery
https://anchore.com/blog/raise-2-overview/
Thu, 09 May 2024

In November 2022, the US Department of the Navy (DON) released the RAISE 2.0 Implementation Guide to support its Cyber Ready initiative, pivoting from a time-intensive compliance checklist mindset to continuous cyber-readiness with real-time visibility. RAISE 2.0 embraces a “shift left” philosophy, performing security checks at all stages of the SDLC and ensuring security issues are found as early as possible, when they are faster and easier to fix.

3 Things to Know about RAISE 2.0 

RAISE 2.0 provides a way to automate the Implement, Assess, and Monitor phases of the Risk Management Framework (RMF). New containerized applications can be developed using an existing DevSecOps platform (DSOP) that already has an Authorization to Operate (ATO) and meets the RAISE requirements. This eliminates the need for a new application to get its own separate ATO because it inherits the ATO of the DSOP.

There are three primary things you need to know about RAISE 2.0:

  1. Who must follow RAISE 2.0? 
    • RAISE 2.0 only applies to applications delivered via containers
    • RAISE 2.0 requires all teams starting or upgrading containerized software applications to use a pre-vetted DevSecOps platform called a RAISE Platform of Choice (RPOC)
  2. What is an RPOC?
    • RPOC is a designation that a DevSecOps Platform (DSOP) receives after its inheritable security controls have received an ATO
    • A DSOP that wants to become an RPOC must follow the process outlined in the Implementation Guide
  3. How does an RPOC accelerate ATO?
    • The containerized application will be assessed and incorporated into the existing RPOC ATO and is not required to have a separate ATO

What are the requirements of RAISE?

The RAISE 2.0 Implementation Guide outlines the requirements for an RPOC. These represent best practices for DevSecOps platforms and incorporate processes and documentation required by the US government.

The guidelines also define the application owner’s ongoing responsibilities, which include overseeing the entire software development life cycle (SDLC) of the application running in the RPOC. 

RPOC Requirements

The requirements for the RPOC include standard DevSecOps best practices and specifically call out tooling, such as:

  • CI/CD pipeline
  • Container Orchestrator
  • Container Repository
  • Code Repository

It also requires specific processes, such as:

  • Continuous Monitoring (ConMon)
  • Ongoing vulnerability scans
  • Ad-hoc vulnerability reporting and dashboard
  • Penetration testing
  • Ad-hoc auditing via a cybersecurity dashboard

If you’re looking for a comprehensive list of RPOC requirements, you can find them in the RAISE 2.0 Implementation Guide on the DON website.

Application Owner Requirements

Separate from the RPOC, the application owner who deploys their software via an RPOC has their own set of requirements. These requirements focus less on the DevSecOps tooling itself and more on how that tooling is implemented for their specific application. Important callouts are:

  • Maintain a Security Requirements Guide (SRG) and Security Technical Implementation Guide (STIG) for their application
  • CI/CD implementation
  • Repository of vulnerability scans (e.g., SBOMs, container security scans, SAST scans, etc.)

Again, if you’re looking for a comprehensive list of application owner requirements, you can find them in the RAISE 2.0 Implementation Guide on the DON website.

Meet RAISE 2.0 requirements with Anchore Federal

Anchore Federal is the leading software supply chain security platform supporting public sector agencies and military programs worldwide. It helps RPOCs and application owners meet RAISE 2.0 requirements by offering an SBOM management platform, a container vulnerability scanner, a flexible policy engine, and a comprehensive reporting dashboard.

Anchore Federal integrates seamlessly with CI/CD pipelines and enables teams to meet essential RPOC requirements, including RPOC requirements 4, 5, 15, 19, 20, and 24. Its reporting capabilities also assist in the Quarterly Reviews, fulfilling QREV requirements 6 and 7. Finally, it helps application owners meet requirements APPO 7, 8, 9, and 10.

Continuous container monitoring in production

Anchore Federal includes the anchore-k8s-inventory tool, which continuously monitors selected Kubernetes clusters and namespaces and identifies the container images running in production. This automated tool, which integrates directly with any standard Kubernetes distribution, allows RPOCs to meet “RPOC 4: Must support continuous monitoring”.
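As a rough sketch of how this looks in practice, the agent reads a config file that selects clusters and namespaces and then reports on an interval. The file name, keys, values, and the -c flag below are illustrative assumptions about the open source tool, not verbatim from its docs:

$ cat anchore-k8s-inventory.yaml
# Illustrative config; check the anchore-k8s-inventory README for the
# exact schema.
kubeconfig:
  path: ~/.kube/config
namespace-selectors:
  include: ["prod-*"]        # watch only production namespaces
anchore:
  url: https://anchore-federal.example.mil
  account: my-program

$ # Run the agent; it polls the Kubernetes API and reports the in-use
$ # container images back to Anchore Federal.
$ anchore-k8s-inventory -c ./anchore-k8s-inventory.yaml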

The Anchore Federal UI provides dashboards summarizing vulnerabilities and your compliance posture across all your clusters. You can drill down into the details for an individual cluster or namespace and filter the dashboard. For example, selecting only high and critical vulnerabilities will only show the clusters and namespaces with those vulnerabilities.

If a zero-day or new vulnerability arises, you can easily see the impacted clusters and containers to assess the blast radius.

Security gates that directly integrate into CI/CD pipeline

Anchore Federal automates required security gates for RPOCs and application owners (i.e., “RPOC 5: Must support the execution of Security Gates” and “APPO 9: Application(s) deployments must successfully pass through all Security Gates”). It integrates seamlessly into the CI/CD pipeline and the container registry, automatically generates SBOMs, and scans containers for vulnerabilities and secrets/keys (Gates 2, 3, and 4 in the RAISE 2.0 Implementation Guide).

Anchore’s security gating feature is backed by a flexible policy engine that allows both RPOCs and application owners to collaborate on the security gates that every container going through a CI/CD pipeline must pass, as well as on customized gates specific to the application. The Anchore Policy Engine accelerates the move to RAISE 2.0 by offering pre-configured policy packs, including NIST SP 800-53, and extensive docs for building custom policies. It is already used by DevSecOps platforms across the DON, including Black Pearl, Party Barge, and Lighthouse installations, and is ready to deploy in classified and air-gapped environments, including IL4 and IL6.
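As an illustrative sketch of how such a gate can be wired into a pipeline (the registry and image names are placeholders, and the subcommands assume a recent AnchoreCTL release), a CI job submits the freshly built image for analysis and fails the build if the policy evaluation does not pass:

$ # Submit the newly built image for SBOM generation and analysis;
$ # --wait blocks until the analysis completes.
$ anchorectl image add registry.example.mil/app:${GIT_COMMIT} --wait

$ # Evaluate the image against the active policy bundle; a failed
$ # evaluation exits non-zero, which fails the CI job.
$ anchorectl image check --detail registry.example.mil/app:${GIT_COMMIT}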

Production container inventory

On top of continuously monitoring production for vulnerabilities in containers, the anchore-k8s-inventory tool can be used to create an inventory of all containers in production, which can then be cross-referenced with the container registry to meet “RPOC 15: Must retain every image that is currently deployed”. This automates the requirement and keeps DON programs compliant with no additional manual work.

Single-pane-of-glass security dashboard

Anchore Federal maintains a complete repository of SBOMs, vulnerabilities, and policy evaluations. Dashboards and reports provide auditability that containers have met the necessary security gates to meet “RPOC 20: Must ensure that the CI/CD pipeline meets security gates”.

Customizable vulnerability reporting and comprehensive dashboard

Anchore Federal comes equipped with a customizable report builder for regular vulnerability reporting and a vulnerability dashboard for ad hoc reviews. This ensures that RPOCs and application owners meet the “RPOC 24: Must provide vulnerability reports and/or vulnerability dashboards for Application Owner review” requirement.

Aggregated vulnerability reporting

Not only can Anchore Federal generate application-specific reporting, it can also provide aggregated vulnerability reporting for all applications and containers on the RPOC. This is vital functionality for meeting the “QREV 6: Consolidated vulnerabilities report of all applications deployed, via the RAISE process” requirement.

Generate and store security artifacts

Anchore Federal both generates and stores security artifacts from the entire CI/CD pipeline, meeting the software supply chain security requirements of “QREV 7: Application Deployment Artifacts”. Specifically, this includes SBOMs (i.e., dependency reports), container security scans, and SRG/STIG scans.

Anchore Federal produces a number of the required reports, including STIG results, container security scan reports, and SBOMs, along with pass/fail reports of policy controls.

STIG results are generated by Anchore’s Static STIG Checker tool. This automates STIG scanning and makes STIG compliance a breeze. 

Remediation recommendations and automated allowlists

RAISE 2.0 requires that Application Owners remediate or mitigate all vulnerabilities with a severity of high or above within 21 calendar days. Anchore Federal helps speed remediation efforts to meet the 21-day deadline. It suggests recommended fixes and includes an Action Workbench where security reviewers can specify an Action Plan detailing what developers should do to remediate each vulnerability. 

Anchore Federal also provides time-based allowlists to enforce the 21-day timeline. With the assistance of remediation recommendations and automated allowlists, application owners know that their applications are always in compliance with “APPO 7: Must review and remediate, or mitigate, all findings listed in the vulnerabilities report per the timelines defined within this guide”.

Automate STIG checks in CI/CD pipeline

The RPOC ISSM and the Application Owner must work together to determine what SRGs and STIGs to implement. Anchore’s Static STIG Checker tool automates STIG checks on Kubernetes environments and then uploads the results to Anchore Federal through a compliance API. That allows users to leverage additional functionality like reporting and correlating the runtime environment to images that have been analyzed.

This automated workflow specifically addresses the “APPO 10: Must provide and continuously maintain a complete Security Requirements Guide (SRG) and Security Technical Implementation Guide (STIG)” requirement.

Getting to RAISE 2.0 with Anchore Federal

Anchore Federal is a complete software supply chain security solution that gets RPOCs and application owners to RAISE 2.0 faster and easier. It is already proven in many Navy programs and across the DoD. With Anchore Federal you can:

  • Automate RAISE requirements
  • Meet required security gates
  • Generate recurring reports
  • Integrate with existing DSOPs

Anchore Federal was designed specifically to automate compliance with federal and Department of Defense (DoD) standards (e.g., RAISE, cATO, NIST SSDF, NIST 800-53) for application owners and DSOPs that would rather use a turnkey software supply chain security solution than build their own. The easiest way to achieve RAISE 2.0 is by adopting a DoD software factory, the DoD-wide generalization of the Navy’s RPOC.

If you’re interested in learning more about how Anchore can help your organization embed DevSecOps tooling and principles into your software development process, click below to read our white paper.

Navigate SSDF Attestation with this Practical Guide
https://anchore.com/blog/navigate-ssdf-attestation-with-this-practical-guide/
Tue, 07 May 2024

The clock is ticking again for software producers selling to federal agencies. In the second half of 2024, CEOs or their designees must begin providing an SSDF attestation that their organization adheres to the secure software development practices documented in NIST SSDF 800-218.

Download our latest ebook to navigate through SSDF attestation quickly and adhere to timelines. 

SSDF attestation covers four main areas from NIST SSDF:

  1. Securing development environments,
  2. Using automated tools to maintain trusted source code supply chains,
  3. Maintaining provenance (e.g., via SBOMs) for internal code and third-party components, and
  4. Using automated tools to check for security vulnerabilities.

This new requirement is not to be taken lightly. It applies to all software producers selling to any federal agency, regardless of whether they provide their end product as SaaS or on-prem. The SSDF attestation deadline is June 11, 2024, for critical software and September 11, 2024, for all software. However, on-prem software developed before September 14, 2022, will only require SSDF attestation when a new major version is released. The bottom line is that most organizations will need to comply in 2024.

Companies will make their SSDF attestation through an online Common Form that covers the minimum secure software development requirements that software producers must meet. Individual agencies can add agency-specific instructions outside of the Common Form. 

Organizations that want to ensure they meet all relevant requirements can submit a third-party assessment instead of a CEO attestation. You must use a Third-Party Assessment Organization (3PAO) that is FedRAMP-certified or approved by an agency official.  This option is a no-brainer for cloud software producers who use a 3PAO for FedRAMP.

There are a lot of details here, so we put together a practical guide to the SSDF attestation requirements and how to meet them: “SSDF Attestation 101: A Practical Guide for Software Producers”. We also cover how Anchore Enterprise automates SSDF attestation compliance by integrating directly into your software development pipeline and using continuous policy scanning to detect issues before they hit production.

Zero Trust Webinar with Security Boulevard
https://anchore.com/events/zero-trust-webinar-with-security-boulevard/
Mon, 06 May 2024

Anchore Enterprise 5.5: Vulnerability Feed Service Improvements
https://anchore.com/blog/enterprise-5-5-release-vulnerability-feed-improvements/
Thu, 02 May 2024

Today, we are announcing the release of Anchore Enterprise 5.5, the fifth in our regular minor monthly releases. There are a number of improvements to GUI performance, multi-tenancy support, and AnchoreCTL, but with the uncertainty at NVD continuing, the main news is updates to our feed drivers, which help customers adapt to the current situation at NIST.

The NIST NVD interruption and backlog 

A few weeks ago, Anchore alerted the security community to the changes at the National Vulnerability Database (NVD) run by the US National Institute of Standards and Technology (NIST). As of February 18th, there was a massive decline in the number of records published with metadata such as severity and CVSS scores. Less publicized but no less problematic, the availability of the API that provides access to NVD records has also been erratic over the same period.


While the uncertainty around NVD continues, and recognizing that it remains a federally mandated data source (e.g., within FedRAMP), Anchore has updated its drivers to give customers flexibility in how they interact with NVD data.

In a typical Anchore Enterprise deployment, NVD data has served two functions. The first is as a catalog of CVEs that can be correlated with advisories from other vendors to provide more context around an issue. The second is as a matching source of last resort for surfacing vulnerabilities where no other vulnerability data exists. This comes with a caveat: the expansiveness of the NVD database means data quality is variable, which can lead to false positives.

See it in action!

To see the latest features in action, please join our webinar “Adapting to the new normal at NVD with Anchore Vulnerability Feed” on May 7th at 10am PT/1pm EST. Register Now.

Improvements to the Anchore Vulnerability Feed Service

The Exclusion feed

Anchore released its own Vulnerability Feed with v4.1, which provided an “Exclusion Feed” to avoid NVD false positives by preventing matches against vulnerability records that were known to be inaccurate.

As of v5.5, we have extended the Anchore Vulnerability Feed service to provide two additional data features. 

Simplify network management with 3rd party vulnerability feed 

The first is the ability to download a copy of 3rd party vulnerability feed data sets, including NVD, directly from Anchore, so that transient service availability issues don’t generate alerts. This simplifies network management: only one firewall rule needs to be created to enable retrieval of all vulnerability data for use with Anchore Enterprise. This proxy-mode is the default in 5.5 for the majority of feeds, but customers who prefer autonomous operations that don’t rely on Anchore, contacting NVD and other vendor endpoints directly, can continue to use the direct-mode as before.

Enriched data

The second change is that Anchore is continuing to add new CVE records from the NVD database while providing Enriched Data on those records, specifically CPE information, which helps map affected versions. While Anchore can’t provide NVD severity or CVSS records, which by definition have to come from NVD itself, these records and metadata will continue to allow customers to reference CVEs. This data is available by default in the proxy-mode mentioned above, or as a configuration option with the direct-mode.

How it works

For the 3rd party vulnerability data, Anchore runs the same feed service software that customers would run in their local site; it periodically retrieves the data from all of its available sources and structures it into a format ready to be downloaded by customers. Using the existing feed drivers in their local feed service (NVD, GitHub, etc.), customers download the entire database in one atomic operation. This contrasts with the API-based approach of the direct-mode, where individual records are retrieved one at a time, which can take considerable time. Proxy-mode can be enabled on a driver-by-driver basis, as sketched below.
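Purely as an illustration of what driver-by-driver selection could look like, here is a hypothetical slice of feed service configuration; every key name below is invented for this sketch, so consult the 5.5 release documentation for the real schema:

# Hypothetical feed driver configuration -- key names are illustrative,
# not the actual Anchore Enterprise 5.5 schema.
drivers:
  nvd:
    mode: proxy      # download Anchore's pre-built copy in one atomic operation
  github:
    mode: proxy
  rhel:
    mode: direct     # keep contacting the vendor endpoint directly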

For the enriched data, Anchore runs a service that looks for new CVE records from other upstream sources, for example the CVE5 database hosted by MITRE, plus custom data added by Anchore engineers. The database tables that host the NVD records are then populated with these CVE updates. CPE data is sourced from vendor records (e.g., Red Hat Security Advisories) and added to the NVD records to enable matching logic.

Looking forward

Throughout the year, we will be making additional improvements to the Anchore Vulnerability Feed to help customers not just navigate the uncertainty at NVD but also reduce their false positive and false negative counts in general.

See it in action!

Join our webinar Adapting to the new normal at NVD with Anchore Vulnerability Feed.
May 7th at 10am PT/1pm EST

Modeling Software Security as Unit Tests: A Mental Model for Developers
https://anchore.com/blog/modeling-software-security-as-unit-tests-a-mental-model-for-developers/
Tue, 30 Apr 2024

Modern software development is complex to say the least. Vulnerabilities often lurk within the vast networks of dependencies that underpin applications. A typical scenario involves a simple app.go source file that is underpinned by a sprawling tree of external libraries and frameworks (check the go.mod file for the receipts). As developers incorporate these dependencies into their applications, the security risks escalate, often surpassing the complexity of the original source code. This real-world challenge highlights a critical concern: the hidden vulnerabilities that are magnified by the dependencies themselves, making the task of securing software increasingly daunting.

Addressing this challenge requires reimagining software supply chain security through a different lens. In a recent webinar, the famed Kelsey Hightower provided an apt analogy to help bring the sometimes opaque world of security into focus for a developer: software security can be thought of as just another test in the software testing suite, and the system that manages the tests and the associated metadata is a data pipeline. We’ll be exploring this analogy in more depth in this blog post, and by the end we will have built a bridge between developers and security.

The Problem: Modern software is built on a tower of dependencies

Modern software is built from a tower of libraries and dependencies that increase developer productivity, but with these boosts comes the risk of increased complexity. Below is a simple ‘ping-pong’ (i.e., request-response) application written in Go that imports a single HTTP web framework:

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	// Create a router with gin's default middleware (logger and recovery).
	r := gin.Default()
	// Respond to GET /ping with a JSON "pong" payload.
	r.GET("/ping", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{
			"message": "pong",
		})
	})
	// Listen and serve on the default address (:8080).
	r.Run()
}

With this single framework comes a laundry list of dependencies that are needed in order to work. This is the go.mod file that accompanies the application:

module app

go 1.20

require github.com/gin-gonic/gin v1.7.2

require (
	github.com/gin-contrib/sse v0.1.0 // indirect
	github.com/go-playground/locales v0.13.0 // indirect
	github.com/go-playground/universal-translator v0.17.0 // indirect
	github.com/go-playground/validator/v10 v10.4.1 // indirect
	github.com/golang/protobuf v1.3.3 // indirect
	github.com/json-iterator/go v1.1.9 // indirect
	github.com/leodido/go-urn v1.2.0 // indirect
	github.com/mattn/go-isatty v0.0.12 // indirect
	github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421 // indirect
	github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742 // indirect
	github.com/ugorji/go/codec v1.1.7 // indirect
	golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 // indirect
	golang.org/x/sys v0.0.0-20200116001909-b77594299b42 // indirect
	gopkg.in/yaml.v2 v2.2.8 // indirect
)

The dependencies for the application end up being larger than the application source code, and each of these dependencies carries the potential for a vulnerability that could be exploited by a determined adversary. Kelsey Hightower summed this up well: “this is software security in the real world”. The vulnerability scan later in this post shows the same pattern in a Java app, where vulnerable dependencies (log4j-core, in that case) hide inside the frameworks the application is built on.

As much as we might want to put the genie back in the bottle, the productivity boosts of building on top of frameworks are too good to reverse this trend. Instead, we have to look for different ways to manage security in this more complex world of software development.

If you’re looking for a solution to the complexity of modern software vulnerability management, be sure to take a look at the Anchore Enterprise platform and the included container vulnerability scanner.

The Solution: Modeling software supply chain security as a data pipeline

Software supply chain security is a meta problem of software development. The solution to most meta problems in software development is data pipeline management. 

Developers have learned this lesson before. When you first build an application and something goes wrong, you solve the problem by logging the error. This is a great solution until you’ve written your first hundred logging statements. Suddenly the solution has become its own problem, and the developer is buried under a mountain of logging data. This is where a logging (read: data) pipeline steps in. The pipeline manages the mountain of log data and helps developers sift the signal from the noise.

The same pattern emerges in software supply chain security. From the first run of a vulnerability scanner on almost any modern software, a developer will find themselves buried under a mountain of security metadata. 

$ grype dir:~/webinar-demo/examples/app:v2.0.0

 ✔ Vulnerability DB                [no update available]  
 ✔ Indexed file system                                                                            ~/webinar-demo/examples/app:v2.0.0
 ✔ Cataloged contents                                                         889d95358bbb68b88fb72e07ba33267b314b6da8c6be84d164d2ed425c80b9c3
   ├── ✔ Packages                        [16 packages]  
   └── ✔ Executables                     [0 executables]  
 ✔ Scanned for vulnerabilities     [11 vulnerability matches]  
   ├── by severity: 1 critical, 5 high, 5 medium, 0 low, 0 negligible
   └── by status:   11 fixed, 0 not-fixed, 0 ignored 

NAME                      INSTALLED                           FIXED-IN                           TYPE          VULNERABILITY        SEVERITY 
github.com/gin-gonic/gin  v1.7.2                              1.7.7                              go-module     GHSA-h395-qcrw-5vmq  High      
github.com/gin-gonic/gin  v1.7.2                              1.9.0                              go-module     GHSA-3vp4-m3rf-835h  Medium    
github.com/gin-gonic/gin  v1.7.2                              1.9.1                              go-module     GHSA-2c4m-59x9-fr2g  Medium    
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20211202192323-5770296d904e  go-module     GHSA-gwc9-m7rh-j2ww  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20220314234659-1baeb1ce4c0b  go-module     GHSA-8c26-wmh5-6g9v  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20201216223049-8b5274cf687f  go-module     GHSA-3vm4-22fp-5rfm  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.17.0                             go-module     GHSA-45x7-px36-x8w8  Medium    
golang.org/x/sys          v0.0.0-20200116001909-b77594299b42  0.0.0-20220412211240-33da011f77ad  go-module     GHSA-p782-xgp4-8hr8  Medium    
log4j-core                2.15.0                              2.16.0                             java-archive  GHSA-7rjr-3q55-vv33  Critical  
log4j-core                2.15.0                              2.17.0                             java-archive  GHSA-p6xc-xr62-6r2g  High      
log4j-core                2.15.0                              2.17.1                             java-archive  GHSA-8489-44mv-ggj8  Medium

All of this from a single innocuous import statement pulling in your favorite application framework.

Again, the data pipeline comes to the rescue and helps manage the flood of security metadata. In the rest of this post, we’ll step through the major functions of a data pipeline customized to solve the problem of software supply chain security.

Modeling SBOMs and vulnerability scans as unit tests

I like to think of security tools as just another test. A unit test might test the behavior of my code. I think this falls in the same quality bucket as linters to make sure you are following your company’s style guide. This is a way to make sure you are following your company’s security guide.

Kelsey Hightower

This idea from renowned developer Kelsey Hightower is apt, particularly for software supply chain security. Tests are a mental model that developers use on a daily basis. Security tools are functions run against your application that produce security data about it, rather than the behavioral information a unit test produces. The first two foundational functions of software supply chain security are identifying all of the software dependencies and scanning those dependencies for known vulnerabilities (i.e., ‘testing’ for vulnerabilities in an application).

This is typically accomplished by running an SBOM generation tool like Syft to create an inventory of all dependencies, then running a vulnerability scanner like Grype to compare the inventory of software components in the SBOM against a database of known vulnerabilities. Going back to the data pipeline model, the SBOM and the vulnerability database are the data sources, and the vulnerability report is the transformed security metadata that feeds the rest of the pipeline.
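As a minimal sketch of that two-step flow (the directory path and output file name are placeholders), Syft produces the SBOM and Grype consumes it:

$ # Generate an SBOM inventory of everything in the project directory.
$ syft dir:./examples/app -o spdx-json > app.spdx.json

$ # Match the SBOM's components against the vulnerability database.
$ grype sbom:./app.spdx.json

Grype can also scan a directory directly, folding SBOM generation and vulnerability matching into a single step, which is what the report below shows.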

$ grype dir:~/webinar-demo/examples/app:v2.0.0 -o json

 ✔ Vulnerability DB                [no update available]  
 ✔ Indexed file system                                                                            ~/webinar-demo/examples/app:v2.0.0
 ✔ Cataloged contents                                                         889d95358bbb68b88fb72e07ba33267b314b6da8c6be84d164d2ed425c80b9c3
   ├── ✔ Packages                        [16 packages]  
   └── ✔ Executables                     [0 executables]  
 ✔ Scanned for vulnerabilities     [11 vulnerability matches]  
   ├── by severity: 1 critical, 5 high, 5 medium, 0 low, 0 negligible
   └── by status:   11 fixed, 0 not-fixed, 0 ignored 

{
 "matches": [
  {
   "vulnerability": {
    "id": "GHSA-h395-qcrw-5vmq",
    "dataSource": "https://github.com/advisories/GHSA-h395-qcrw-5vmq",
    "namespace": "github:language:go",
    "severity": "High",
    "urls": [
     "https://github.com/advisories/GHSA-h395-qcrw-5vmq"
    ],
    "description": "Inconsistent Interpretation of HTTP Requests in github.com/gin-gonic/gin",
    "cvss": [
     {
      "version": "3.1",
      "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:L/A:N",
      "metrics": {
       "baseScore": 7.1,
       "exploitabilityScore": 2.8,
       "impactScore": 4.2
      },
      "vendorMetadata": {
       "base_severity": "High",
       "status": "N/A"
      }
     }
    ],
    . . . 

This security testing was previously done just prior to pushing an application to production, as a release gate that had to be passed before software could ship. But just as unit tests have moved earlier in the software development lifecycle as DevOps principles won the industry’s mindshare, security testing has “shifted left” in the development cycle. With self-contained, open source CLI tooling like Syft and Grype, developers can now incorporate security testing into their development environment and test for vulnerabilities before even pushing a commit to a continuous integration (CI) server.

From a security perspective this is a huge win. Security vulnerabilities are caught earlier in the development process and fixed before they come up against a delivery due date. But all of this newly created data has led to a different problem: data overload.

Vulnerability overload: uncovering the signal in the noise

Like the world of application logs that came before it, security scanning eventually generates so much information that finding the signal in the noise becomes its own problem.

How Anchore Enterprise manages SBOMs and vulnerability scans

Centralized management of SBOMs and vulnerability scans can end up being a massive headache. No need to come up with your own storage and data management solution. Just configure the AnchoreCTL CLI tool to automatically submit SBOMs and vulnerability scans as you run them locally. Anchore Enterprise stores all of this data for you.

On top of this, Anchore Enterprise offers data analysis tools so that you can search and filter SBOMs and vulnerability scans by version, build stage, vulnerability type, etc.

Combining local developer tooling with centralized data management creates a best-of-both-worlds environment where engineers can still get their tasks done locally with ease while offloading the arduous management tasks to a server.
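A sketch of what that local-to-central handoff can look like with AnchoreCTL (the image reference is a placeholder, and the subcommands assume a recent AnchoreCTL release):

$ # Analyze an image and submit its SBOM to Anchore Enterprise; --wait
$ # blocks until the backend analysis completes.
$ anchorectl image add registry.example.com/app:v2.0.0 --wait

$ # Query the centrally stored vulnerability results for that image.
$ anchorectl image vulnerabilities registry.example.com/app:v2.0.0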

Added benefit: SBOM drift detection

Another benefit of distributed SBOM generation and vulnerability scanning is that this security check can be run at each stage of the build process. It would be nice to believe that the software written in a developer’s local environment always makes it through to production in an untouched, pristine state, but this is rarely the case.

Running SBOM generation and vulnerability scanning at development, on the build server, in the artifact registry, at deploy, and during runtime creates a full picture of where and when software is modified in the development process. This simplifies post-incident investigations and, even better, catches issues well before they make it to a production environment.

This historical record is a feature provided by Anchore Enterprise called Drift Detection. In the same way that an HTTP cookie creates state between individual HTTP requests, drift detection is security metadata about security metadata (recursion, much?) that creates state between each stage of the build pipeline. Being the central store for all of the associated security metadata makes the Anchore Enterprise platform the ideal location to aggregate and scan for these particular anomalies.

Policy as a lever

Being able to filter through all of the noise created by integrating security checks across the software development process creates massive leverage when searching for a particular issue. But it is still a manual process, and being a full-time investigator isn’t part of the software engineer job description. Wouldn’t it be great if we could automate some, if not the majority, of these investigations?

I’m glad we’re of like minds, because this is where policy comes into the picture. Returning to Kelsey Hightower’s original comparison of security tools to linters, policy is the security guide codified by your security team that allows you to quickly check whether the commit you put together meets the standards for secure software.

By running these checks automatically, developers receive quick feedback on any potential security issues discovered in their commit. This lets them polish their code before it is flagged, and potentially failed, by the security check on the CI server. No more waiting on the security team to review your commit before it can proceed to the next stage: developers are empowered to resolve the security risks themselves and can feel confident that their code won’t be held up downstream.

Policies-as-code supports existing developer workflows

Anchore Enterprise designed its policy engine to ingest the individual policies as JSON objects that can be integrated directly into the existing software development tooling. Create a policy in the UI or CLI, generate the JSON and commit it directly to the repo.

This prevents the painful context switching of moving between different interfaces and allows engineering and security teams to reap the rewards of version control and rollbacks that come pre-baked into tools like Git. Anchore Enterprise was designed by engineers for engineers, which made policy-as-code the obvious choice when designing the platform.
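For illustration, an abridged policy bundle might look like the following; the IDs and names are placeholders, and the exact schema is documented with Anchore Enterprise. The single rule shown stops any image carrying a fixable vulnerability of high severity or above:

{
  "id": "example-bundle",
  "version": "1_0",
  "name": "Example security guide",
  "policies": [
    {
      "id": "default-policy",
      "version": "1_0",
      "name": "Default policy",
      "rules": [
        {
          "id": "rule-1",
          "gate": "vulnerabilities",
          "trigger": "package",
          "action": "STOP",
          "params": [
            { "name": "package_type", "value": "all" },
            { "name": "severity_comparison", "value": ">=" },
            { "name": "severity", "value": "high" },
            { "name": "fix_available", "value": "true" }
          ]
        }
      ]
    }
  ]
}

Committed to the repo alongside the application code, a file like this gets the same review, versioning, and rollback treatment as any other change.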

Remediation automation integrated into the development workflow

Being alerted when a commit violates your company’s security guidelines is better than pushing insecure code and finding out from a breach that you forgot to sanitize user input. But even after you get alerted to a problem, you still need to understand what is insecure and how to fix it. You can Google the issue or start a conversation with your security team, but that just creates more work for you before your commit can enter the build pipeline. What if the answer for how to make your commit secure arrived directly in your normal workflow?

Anchore Enterprise provides remediation recommendations to help create actionable advice on how to resolve security alerts that are flagged by a policy. This helps point developers in the right direction so that they can resolve their vulnerabilities quickly and easily without the manual back and forth of opening a ticket with the security team or Googling aimlessly to find the correct solution. The recommendations can be integrated directly into GitHub Issues or Jira tickets in order to blend seamlessly into the workflows that teams depend on to coordinate work across the organization.

Wrap-Up

From the perspective of a developer, it can sometimes feel like the security team is primarily a frustration that only slows down your ability to ship code. Anchore has internalized this feedback and built a platform that allows developers to keep moving at DevOps speed while producing high-quality, secure code. By integrating directly into developer workflows (e.g., CLI tooling, CI/CD integrations, source code repository integrations, etc.) and providing actionable feedback, Anchore Enterprise removes the roadblock mentality that has traditionally characterized the relationship between development and security.

If you’re interested in seeing all of the features described in this blog post in a hands-on demo, check out the webinar and the accompanying workshop hosted on GitHub.

If you’re looking to go further in-depth with how to build and secure containers in the software supply chain, be sure to read our white paper: The Fundamentals of Container Security.

Upstream – a Tidelift expedition
https://anchore.com/events/upstream-a-tidelift-expedition/
Mon, 29 Apr 2024

Adapting to the new normal at NVD with Anchore Vulnerability Feed
https://anchore.com/webinars/adapting-to-the-new-normal-at-nvd-with-anchore-vulnerability-feed/
Wed, 24 Apr 2024

SSDF Attestation 101: A Practical Guide for Software Producers
https://anchore.com/white-papers/ssdf-attestation-101-a-practical-guide-for-software-producers/
Wed, 24 Apr 2024

This ebook sheds light on the recently increased security requirements by the US government and helps companies understand the 4 main requirements for SSDF attestation.

AWS Summit
https://anchore.com/events/aws-summit/
Wed, 24 Apr 2024
