Containerizing Your Code: Docker and Kubeflow Pipelines

Kubeflow Pipelines allows you to build, deploy, and manage end-to-end machine learning workflows. In order to use custom code in your pipeline, you need to containerize it using Docker. This ensures that your code can be easily deployed, scaled, and managed by Kubernetes, which is the underlying infrastructure for Kubeflow. In this tutorial, we will guide you through containerizing your Python code using Docker and integrating it into a Kubeflow Pipeline.


Prerequisites

  1. Familiarity with Python programming
  2. Kubeflow Pipelines installed and set up (follow our previous tutorial, “Setting up Kubeflow Pipelines: A Step-by-Step Guide”)

Step 1: Write Your Python Script

Create a new Python script (e.g., process_data.py) containing the following code:

import sys

def process_data(input_data):
    return input_data.upper()

if __name__ == "__main__":
    input_data = sys.argv[1]
    processed_data = process_data(input_data)
    print(f"Processed data: {processed_data}")

This script takes an input string as a command-line argument, converts it to uppercase, and prints the result.
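Note that the script raises a bare IndexError if it is run without an argument. If you want more helpful behavior when the container is launched incorrectly, a standard-library argparse variant might look like this (a sketch; the function and argument names are the same as above, the messages are illustrative):

```python
import argparse

def process_data(input_data: str) -> str:
    # Convert the input string to uppercase.
    return input_data.upper()

def main(argv=None) -> str:
    parser = argparse.ArgumentParser(description="Uppercase an input string.")
    parser.add_argument("input_data", help="the string to process")
    args = parser.parse_args(argv)
    processed = process_data(args.input_data)
    print(f"Processed data: {processed}")
    return processed

if __name__ == "__main__":
    main()
```

With argparse, a missing argument produces a usage message and a non-zero exit status, which is easier to diagnose from container logs than a traceback.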

Step 2: Create a Dockerfile

Create a new file named Dockerfile in the same directory as your Python script, and add the following content:

FROM python:3.7

WORKDIR /app
COPY process_data.py .
ENTRYPOINT ["python", "process_data.py"]

This Dockerfile specifies that the base image is python:3.7, sets the working directory to /app, copies the Python script into the container, and sets the entry point to execute the script when the container is run.
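If your script later needs third-party packages, a common extension is to install them from a requirements.txt before copying the code, so the dependency layer is cached across rebuilds. A sketch, assuming a requirements.txt sits next to the script:

```dockerfile
FROM python:3.7

WORKDIR /app

# Install dependencies first so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code last.
COPY process_data.py .
ENTRYPOINT ["python", "process_data.py"]
```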

Step 3: Build the Docker Image

Open a terminal or command prompt, navigate to the directory containing the Dockerfile and Python script, and run the following command to build the Docker image:

docker build -t your_username/data_processing:latest .

Replace your_username with your Docker Hub username or another identifier. This command builds a Docker image with the specified tag and the current directory as the build context.

Step 4: Test the Docker Image

Test the Docker image by running the following command:

docker run --rm your_username/data_processing:latest "hello world"

This should output:

Processed data: HELLO WORLD

Step 5: Push the Docker Image to a Container Registry

To use the Docker image in a Kubeflow Pipeline, you need to push it to a container registry, such as Docker Hub, Google Container Registry, or Amazon Elastic Container Registry. In this tutorial, we will use Docker Hub.

First, log in to Docker Hub using the command line:

docker login

Enter your Docker Hub username and password when prompted.

Next, push the Docker image to Docker Hub:

docker push your_username/data_processing:latest

Step 6: Create a Kubeflow Pipeline using the Docker Image

Now that the Docker image is available in a container registry, you can use it in a Kubeflow Pipeline. Create a new Python script (e.g., pipeline.py) and add the following code:

import kfp
from kfp import dsl

def data_processing_op(input_data: str):
    return dsl.ContainerOp(
        name="Data Processing",
        image="your_username/data_processing:latest",
        arguments=[input_data],
    )

@dsl.pipeline(
    name="Custom Pipeline",
    description="A pipeline that uses a custom Docker image for data processing."
)
def custom_pipeline(input_data: str = "hello world"):
    data_processing = data_processing_op(input_data)

if __name__ == "__main__":
    kfp.compiler.Compiler().compile(custom_pipeline, "custom_pipeline.yaml")

This Python script defines a pipeline with a single step that uses the custom Docker image we created earlier. The data_processing_op function takes an input string and returns a ContainerOp object with the specified Docker image and input data.
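As an alternative to uploading the compiled package through the UI in the next step, the kfp SDK can also submit it programmatically. A minimal sketch, assuming the kfp v1 SDK and a reachable Kubeflow Pipelines endpoint (the host and run name below are illustrative):

```python
# A sketch, assuming the kfp v1 SDK (pip install kfp) and a reachable
# Kubeflow Pipelines endpoint; the host and run_name are illustrative.

def submit_run(host: str, package_path: str, input_data: str):
    """Upload a compiled pipeline package and start a run."""
    import kfp  # deferred so the helper can be defined without kfp installed

    client = kfp.Client(host=host)
    return client.create_run_from_pipeline_package(
        package_path,
        arguments={"input_data": input_data},
        run_name="custom-pipeline-run",
    )

# Example (requires a live cluster, e.g. after port-forwarding the
# Pipelines UI service to localhost:8080):
#   result = submit_run("http://localhost:8080", "custom_pipeline.yaml", "hello kubeflow")
#   print(result.run_id)
```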

Step 7: Upload and Run the Pipeline

Open the Kubeflow Pipelines dashboard in your browser, then:

  1. Click on the “Pipelines” tab in the left-hand sidebar.
  2. Click the “Upload pipeline” button in the upper right corner.
  3. In the “Upload pipeline” dialog, click “Browse” and select the custom_pipeline.yaml file generated in the previous step.
  4. Click “Upload” to upload the pipeline to the Kubeflow platform.
  5. Once the pipeline is uploaded, click on its name to open the pipeline details page.
  6. Click the “Create run” button to start a new run of the pipeline.
  7. On the “Create run” page, you can give your run a name and choose a pipeline version. Click “Start” to begin the pipeline run.

Step 8: Monitor the Pipeline Run

After starting the pipeline run, you will be redirected to the “Run details” page. Here, you can monitor the progress of your pipeline, view the logs for each step, and inspect the output artifacts.

  1. To view the logs for a specific step, click on the step in the pipeline graph and then click the “Logs” tab in the right-hand pane.
  2. To view the output artifacts, click on the step in the pipeline graph and then click the “Artifacts” tab in the right-hand pane.

Congratulations! You have successfully containerized your Python code using Docker and integrated it into a Kubeflow Pipeline. You can now leverage the power of containerization to build more complex pipelines with custom code, ensuring that your machine learning workflows are scalable, portable, and easily maintainable.

By using containers, you ensure that your custom code is deployable, maintainable, and scalable across different environments. As you continue to work with Kubeflow Pipelines, you can explore more advanced features, build more sophisticated pipelines, and optimize your machine learning workflows.