Published on: July 7, 2025


CI/CD inputs: Secure and preferred method to pass parameters to a pipeline

Learn how CI/CD inputs provide type-safe parameter passing with validation, replacing error-prone variables for more reliable pipelines.

GitLab CI/CD inputs represent the future of pipeline parameter passing. As a purpose-built feature designed specifically for typed parameters with validation, clear contracts, and enhanced security, inputs solve the fundamental challenges that teams have been working around with variables for years.

While CI/CD variables have served as the traditional method for passing parameters to pipelines, they were originally designed for storing configuration settings — not as a sophisticated parameter-passing mechanism for complex workflows. This fundamental mismatch has created reliability issues, security concerns, and maintenance overhead that inputs elegantly eliminate.

This article demonstrates why CI/CD inputs should be your preferred approach for pipeline parameters. You'll discover how inputs provide type safety, prevent common pipeline failures, eliminate variable collision issues, and create more maintainable automation. You'll also see practical examples of inputs in action and how they solve real-world challenges, which we hope will encourage you to transition from variable-based workarounds to input-powered reliability.

The hidden costs of variable-based parameter passing

The problems with using variables for parameter passing are numerous and frustrating.

No type validation

Variables are strings. There is no type validation, so a pipeline that expects a boolean or a number can silently receive an arbitrary string instead. This leads to unexpected failures deep into pipeline execution. In a deployment workflow, for example, a critical production deployment can fail hours after it started because a boolean check did not receive the value it expected.

Runtime mutability

Variables can be modified throughout the pipeline runtime, creating unpredictable behavior when multiple jobs attempt to change the same values. For example, deploy_job_a sets DEPLOY_ENV=staging, but deploy_job_b changes the DEPLOY_ENV value to production.
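In GitLab, one common way this surfaces is through dotenv artifacts, where one job's exported values feed later jobs. A minimal sketch of the scenario above, assuming a dotenv-based handoff (job and file names are illustrative, not from this post):

```yaml
deploy_job_a:
  stage: prepare
  script:
    - echo "DEPLOY_ENV=staging" >> deploy.env    # exports DEPLOY_ENV for later jobs
  artifacts:
    reports:
      dotenv: deploy.env

deploy_job_b:
  stage: prepare
  script:
    - echo "DEPLOY_ENV=production" >> deploy.env # silently overrides the same variable
  artifacts:
    reports:
      dotenv: deploy.env

deploy:
  stage: deploy
  script:
    - echo "Deploying to $DEPLOY_ENV"            # which value wins is not obvious from this job
```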

Security risks

Security concerns arise because variables intended as simple parameters often receive the same access permissions as sensitive secrets. There's no clear contract defining what parameters a pipeline expects, their types, or their default values. A simple BUILD_TYPE parameter that seems innocuous at first glance suddenly has access to production secrets, simply because variables do not inherently distinguish between parameters and sensitive data.

Perhaps most problematically, error detection happens too late in the process. A misconfigured variable might not cause a failure until minutes or even hours into a pipeline run, wasting valuable CI/CD resources and developer time. Teams have developed elaborate workarounds such as custom validation scripts, extensive documentation, and complex naming conventions just to make variable-based parameter passing somewhat reliable.
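As a sketch of what those workarounds look like in practice, teams often prepend a hand-rolled validation job to fail fast on bad values (the variable names and allowed values here are hypothetical):

```yaml
validate-parameters:
  stage: .pre
  script:
    - |
      # Manual validation that typed inputs make unnecessary
      case "$BUILD_TYPE" in
        debug|release) ;;
        *) echo "Invalid BUILD_TYPE: '$BUILD_TYPE' (expected debug or release)"; exit 1 ;;
      esac
    - |
      case "$ENABLE_TESTS" in
        true|false) ;;
        *) echo "Invalid ENABLE_TESTS: '$ENABLE_TESTS' (expected true or false)"; exit 1 ;;
      esac
```

Every such job has to be written, documented, and kept in sync by hand, for every parameter, in every pipeline.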

Many users have requested local debugging capabilities to test pipeline configurations before deployment. While this seems like an obvious solution, it quickly breaks down in practice. Enterprise CI/CD workflows integrate with dozens of external systems — cloud providers, artifact repositories, security scanners, deployment targets — that simply can't be replicated locally. Even if they could, the complexity would make local testing environments nearly impossible to maintain. This mismatch forced us to reframe the problem entirely. Instead of asking "How can we test pipelines locally?" we started asking "How can we prevent configuration issues caused by variable-based parameter passing before users run a CI/CD automation workflow?"

Understanding variable precedence

GitLab's variable system includes multiple precedence levels to provide flexibility for different use cases. While this system serves many valid scenarios like allowing administrators to set instance- or group-wide defaults while letting individual projects override them when needed, it can create challenges when building reusable pipeline components.

When creating components or templates that will be used across different projects and groups, the variable precedence hierarchy can make behavior less predictable. For example, a template that works perfectly in one project might behave differently in another due to group- or instance-level variable overrides that aren't visible in a pipeline configuration.

When including multiple templates, it also can be challenging to track which variables are being set where and how they might interact.

In addition, component authors need to document not just what variables their template uses, but also potential conflicts with variables that might be defined at higher precedence levels.

Variable precedence examples

Main pipeline file (.gitlab-ci.yml):


variables:
  ENVIRONMENT: production  # Top-level default for all jobs
  DATABASE_URL: prod-db.example.com

include:
  - local: 'templates/test-template.yml'
  - local: 'templates/deploy-template.yml'

Test template (templates/test-template.yml):


run-tests:
  variables:
    ENVIRONMENT: test  # Job-level variable overrides the default
  script:
    - echo "Running tests in $ENVIRONMENT environment"  
    - echo "Database URL is $DATABASE_URL"  # Still inherits prod-db.example.com!
    - run-integration-tests --env=$ENVIRONMENT --db=$DATABASE_URL
    # Issue: Tests run in the "test" environment but against the production database

Deploy template (templates/deploy-template.yml):


deploy-app:
  script:
    - echo "Deploying to $ENVIRONMENT"  # Uses production (top-level default)
    - echo "Database URL is $DATABASE_URL"  # Uses prod-db.example.com
    - deploy --target=$ENVIRONMENT --db=$DATABASE_URL
    # This will deploy to production as intended

The challenges in this example:

  1. Partial inheritance: The test job gets ENVIRONMENT=test but still inherits DATABASE_URL=prod-db.example.com.

  2. Coordination complexity: Template authors must know what top-level variables exist and might conflict.

  3. Override behavior: Job-level variables with the same name override defaults, but this isn't always obvious.

  4. Hidden dependencies: Templates become dependent on the main pipeline's variable names.

GitLab recognized these pain points and introduced CI/CD inputs as a purpose-built solution for passing parameters to pipelines, offering typed parameters with built-in validation that occurs at pipeline creation time rather than during execution.

CI/CD inputs fundamentals

Inputs provide typed parameters for reusable pipeline configuration with built-in validation at pipeline creation time, designed specifically for defining values when the pipeline runs. They create a clear contract between the pipeline consumer and the configuration, explicitly defining what parameters are expected, their types, and constraints.

Configuration flexibility and scope

One of the advantages of inputs is their configuration-time flexibility. Inputs are evaluated and interpolated during pipeline creation using the interpolation format $[[ inputs.input-id ]], meaning they can be used anywhere in your pipeline configuration — including job names, rules conditions, images, and any other YAML configuration element. This eliminates the long-standing limitation of variable interpolation in certain contexts.
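For example, an input can drive the image keyword. A minimal sketch assuming a hypothetical base_image input (names are illustrative):

```yaml
spec:
  inputs:
    base_image:
      type: string
      default: alpine:latest
---
build:
  image: $[[ inputs.base_image ]]   # interpolated at pipeline creation, so it works in any keyword
  script:
    - echo "Building on $[[ inputs.base_image ]]"
```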

One common use case we've seen is that users define their job names like test-$[[ inputs.environment ]]-deployment.

Using inputs in job names also prevents naming conflicts when the same component is included multiple times in a single pipeline. Without this capability, including the same component twice would result in job name collisions, with the second inclusion overwriting the first. Input-based job names ensure each inclusion creates uniquely named jobs.

Before inputs:


test-service:
  variables:
    SERVICE_NAME: auth-service
    ENVIRONMENT: staging
  script:
    - run-tests-for $SERVICE_NAME in $ENVIRONMENT

With inputs:


spec:
  inputs:
    environment:
      type: string
    service_name:
      type: string
---

test-$[[ inputs.service_name ]]-$[[ inputs.environment ]]:
  script:
    - run-tests-for $[[ inputs.service_name ]] in $[[ inputs.environment ]]

When included multiple times with different inputs, this creates jobs like test-auth-service-staging, test-payment-service-production, and test-notification-service-development. Each job has a unique, meaningful name that clearly indicates its purpose, making pipeline visualization much clearer than having multiple jobs with identical names that would overwrite each other.

Now let's go back to the first example at the top of this blog and use inputs. One immediate benefit is that, instead of maintaining multiple template files, we can use one reusable template with different input values:


spec:
  inputs:
    environment:
      type: string
    database_url:
      type: string
    action:
      type: string
---

$[[ inputs.action ]]-$[[ inputs.environment ]]:
  script:
    - echo "Running $[[ inputs.action ]] in $[[ inputs.environment ]] environment"
    - echo "Database URL is $[[ inputs.database_url ]]"
    - run-$[[ inputs.action ]] --env=$[[ inputs.environment ]] --db=$[[ inputs.database_url ]]

And in the main .gitlab-ci.yml file we can include it twice (or more) with different values, making sure we avoid naming collisions:


include:
  - local: 'templates/environment-template.yml'
    inputs:
      environment: test
      database_url: test-db.example.com
      action: tests
  - local: 'templates/environment-template.yml'
    inputs:
      environment: production
      database_url: prod-db.example.com
      action: deploy

The result: Instead of maintaining separate YAML files for testing and deployment jobs, you now have a single reusable template that handles both use cases safely. This approach scales to any number of environments or job types — reducing maintenance overhead, eliminating code duplication, and ensuring consistency across your entire pipeline configuration. One template to maintain instead of many, with zero risk of variable collision or configuration drift.

Validation and type safety

Another key difference between variables and inputs lies in validation capabilities. Inputs support different value types, including strings, numbers, booleans, and arrays, with validation occurring immediately when the pipeline is created. If you define an input as a boolean but pass a string, GitLab will reject the pipeline before any jobs execute, saving time and resources.

Here is an example of the enormous benefit of type validation.

Without type validation (variables):


variables:
  ENABLE_TESTS: "true"  # Always a string
  MAX_RETRIES: "3"      # Always a string

deploy_job:
  script:
    - |
      # String comparison only: "yes", "True", or "1" all silently skip the tests
      if [ "$ENABLE_TESTS" = "true" ]; then
        echo "Running tests"
      fi
    - retry_count=$((MAX_RETRIES + 1))  # Works only because the shell happens to coerce "3" to a number

Problem: Because every variable is a string, the check only works if the caller passes the exact string "true". Values like "yes", "True", or "1" silently skip the tests, and nothing validates the value before the job runs.

With type validation (inputs):


spec:
  inputs:
    enable_tests:
      type: boolean
      default: true
    max_retries:
      type: number
      default: 3

      
deploy_job:
  script:
    - if [ "$[[ inputs.enable_tests ]]" = true ]; then  # Works correctly
        echo "Running tests"
      fi
    - retry_count=$(($[[ inputs.max_retries ]] + 1))    # Math works: 4

Real-world impact of a type validation failure with variables: A developer or an automated process triggers a GitLab CI/CD pipeline with ENABLE_TESTS=yes instead of true. If it takes on average 30 minutes before the deployment job starts, then 30 minutes or more into the pipeline run the deployment script tries to evaluate the boolean and fails.

Imagine the impact in terms of time-to-market and, of course, developer time spent debugging why a seemingly basic deploy job failed.

With typed inputs, GitLab CI/CD immediately rejects the pipeline and provides an explicit error message about the type mismatch.
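Beyond basic types, input specs can constrain allowed values further with options and a regex pattern, which GitLab also checks at pipeline creation time. A sketch (input names and patterns are illustrative):

```yaml
spec:
  inputs:
    environment:
      type: string
      options: ['test', 'staging', 'production']  # any other value rejects the pipeline
    version:
      type: string
      regex: ^v\d+\.\d+\.\d+$                     # must look like v1.2.3
---
release-$[[ inputs.environment ]]:
  script:
    - echo "Releasing $[[ inputs.version ]] to $[[ inputs.environment ]]"
```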

Security and access control

Inputs provide enhanced security through controlled parameter passing, with explicit contracts that define exactly what values are expected and allowed, creating a clear boundary between parameters and sensitive data. In addition, inputs are immutable: once the pipeline starts, they cannot be modified during execution, providing predictable behavior throughout the pipeline lifecycle and eliminating the security risks that come from runtime variable manipulation.

Scope and lifecycle

When you define variables using the variables: keyword at the top level of your .gitlab-ci.yml file, these variables become defaults for all jobs in your entire pipeline. When you include templates, you must consider what variables you've defined globally, as they can interact with the template's expected behavior through GitLab's variable precedence order.

Inputs are defined in CI configuration files (e.g. components or templates) and assigned values when a pipeline is triggered, allowing you to customize reusable CI configurations. They exist solely for pipeline creation and configuration time, scoped to the CI configuration file where they're defined, and become immutable references once the pipeline begins execution. Since each component maintains its own inputs, there is no risk of inputs interfering with other components or templates in your pipeline, eliminating variable collision and override issues that can occur with variable-based approaches.

Working with variables and inputs together

We recognize that teams have extensive investments in their variable-based workflows, and migration to inputs doesn't happen overnight. That's why we've developed capabilities that allow inputs and variables to work seamlessly together, providing a bridge between existing variables and the benefits of inputs while overcoming some key challenges in variable expansion.

Let's look at this real-world example.

Variable expansion in rules conditions

A common challenge occurs when using variables that contain other variable references in rules:if conditions. GitLab only expands variables one level deep during rule evaluation, which can lead to unexpected behavior:

# This doesn't work as expected

variables:
  TARGET_ENV:
    value: "${CI_COMMIT_REF_SLUG}"

deploy-job:
  rules:
    - if: '$TARGET_ENV == "production"'  # Compares the literal string "${CI_COMMIT_REF_SLUG}" with "production"
      variables:
        DEPLOY_MODE: "blue-green"

The expand_vars function solves this by forcing proper variable expansion in inputs:

spec:
  inputs:
    target_environment:
      description: "Target deployment environment"
      default: "${CI_COMMIT_REF_SLUG}"
---


deploy-job:
  rules:
    - if: '"$[[ inputs.target_environment | expand_vars ]]" == "production"'
      variables:
        DEPLOY_MODE: "blue-green"
        APPROVAL_REQUIRED: "true"
    - when: always
      variables:
        DEPLOY_MODE: "rolling"
        APPROVAL_REQUIRED: "false"
  script:
    - echo "Target: $[[ inputs.target_environment | expand_vars ]]"
    - echo "Deploy mode: ${DEPLOY_MODE}"

Why this matters

Without expand_vars, rule conditions evaluate against the literal variable reference (like "${CI_COMMIT_REF_SLUG}") rather than the expanded value (like "production"). This leads to rules that never match when you expect them to, breaking conditional pipeline logic.

Important notes about expand_vars:

  • Only variables that can be used with the include keyword are supported

  • Variables must be unmasked (not marked as protected/masked)

  • Nested variable expansion is not supported

  • Rule conditions using expand_vars must be properly quoted: '"$[[ inputs.name | expand_vars ]]" == "value"'

This pattern solves the single-level variable expansion limitation, working for any conditional logic that requires comparing fully resolved variable values.

Function chaining for advanced processing

Along with expand_vars, you can use functions like truncate to shorten values for compliance with naming restrictions (such as Kubernetes resource names), creating sophisticated parameter processing pipelines while maintaining input safety and predictability.


spec:  
  inputs:
    service_identifier:
      default: 'service-$CI_PROJECT_NAME-$CI_COMMIT_REF_SLUG'
---

create-resource:
  script:
    - resource_name=$[[ inputs.service_identifier | expand_vars | truncate(0,50) ]]

This integration capability allows you to adopt inputs gradually while leveraging your existing variable infrastructure, making the migration path much smoother.

From components only to CI pipelines

Until GitLab 17.11, users could use inputs only in components and templates through the include: syntax. This limited inputs to reusable CI/CD configurations and didn't address the broader need for dynamic pipeline customization.

Pipeline-wide inputs support

Starting with GitLab 17.11, GitLab users can now use inputs to safely modify pipeline behavior across all pipeline execution contexts, replacing the traditional reliance on pipeline variables. This expanded support includes:

  • Scheduled pipelines: Define inputs with defaults for automated pipeline runs while allowing manual override when needed.

  • Downstream pipelines: Pass structured inputs to child and multi-project pipelines with proper validation and type safety.

  • Manual pipelines: Present users with a clean, validated form interface.
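For downstream pipelines, inputs are passed on the trigger keyword's include entry, mirroring the top-level include:inputs syntax shown earlier. A minimal sketch (file path and values are illustrative):

```yaml
trigger-child:
  trigger:
    include:
      - local: path/to/child-pipeline.yml
        inputs:
          environment: staging
          enable_tests: true
```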

These enhancements, with more to follow, allow teams to gradually modernize their pipelines while maintaining backward compatibility. Once inputs are fully adopted, users can disable pipeline variables to ensure a more secure and predictable CI/CD environment.

Summary

The transition from variables to inputs represents more than just a technical upgrade — it's a shift toward more maintainable, predictable, and secure CI/CD pipelines. While variables continue to serve important purposes for configuration, inputs provide the parameter-passing capabilities that teams have been working around for years.

We understand that variables are deeply embedded in existing workflows, which is why we've built bridges between the two systems. The expand_vars function and other input capabilities allow you to adopt inputs gradually while leveraging your existing variable infrastructure.

By starting with new components and templates, then gradually migrating high-impact workflows, you'll quickly see the benefits of clearer contracts, earlier error detection, and more reliable automation that scales across your organization. Additionally, moving to inputs creates an excellent foundation for leveraging GitLab's CI/CD Catalog, where reusable components with typed interfaces become powerful building blocks for your DevOps workflows. But more on that in our next blog post.

Your future self and your teammates will thank you for the clarity and reliability that inputs bring to your CI/CD workflows, while still being able to work with the variable systems you've already invested in.

What's next

Looking ahead, we're expanding inputs to solve two key challenges: enhancing pipeline triggering with cascading options that dynamically adjust based on user selections, and providing job-level inputs that allow users to retry individual jobs with different parameter values. We encourage you to follow these discussions, share your feedback, and contribute to shaping these features. You can also provide general feedback on CI/CD inputs through our feedback issue.

We want to hear from you

Enjoyed reading this blog post or have questions or feedback? Share your thoughts by creating a new topic in the GitLab community forum.
