Simple Log Service: Collect text logs from an ACK cluster in DaemonSet mode

Last Updated: May 19, 2025

Simple Log Service allows you to install Logtail in DaemonSet or Sidecar mode and use Logtail to collect text logs from a Kubernetes cluster. For more information about the differences between the modes, see Install Logtail to collect logs from a Kubernetes cluster. This topic describes how to install Logtail in DaemonSet mode and use Logtail to collect text logs from an Alibaba Cloud Container Service for Kubernetes (ACK) cluster.

Prerequisites

Simple Log Service is activated. For more information, see Activate Simple Log Service.

Usage notes

This topic applies to ACK managed clusters and ACK dedicated clusters.

Solution overview

You can perform the following steps to install Logtail in DaemonSet mode and use Logtail to collect text logs from an ACK cluster:

  1. Install Logtail components: Install Logtail components in your ACK cluster. The Logtail components include DaemonSet logtail-ds, ConfigMap alibaba-log-configuration, and Deployment alibaba-log-controller. After Logtail is installed, Simple Log Service can deliver a Logtail configuration to Logtail and use Logtail to collect logs from the ACK cluster.

  2. Create a Logtail configuration: After a Logtail configuration is created, Logtail collects incremental logs based on the configuration, processes the logs, and uploads them to the specified Logstore. You can create a Logtail configuration by using CRD - AliyunPipelineConfig, CRD - AliyunLogConfig, or environment variables, or in the Simple Log Service console. CRD - AliyunPipelineConfig is recommended.

  3. Query and analyze logs: After a Logtail configuration is created, Simple Log Service automatically creates a Logstore to store the collected logs. You can view the logs in the Logstore.

Step 1: Install Logtail components

Install Logtail components in an existing ACK cluster

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the one you want to manage and click its name. In the left-side navigation pane, choose Operations > Add-ons.

  3. On the Logs and Monitoring tab of the Add-ons page, find the logtail-ds component and click Install.

Install Logtail components when you create an ACK cluster

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click Create Kubernetes Cluster. In the Component Configurations step of the wizard, select Enable Log Service.

    This topic describes only the settings related to Simple Log Service. For more information about other settings, see Create an ACK managed cluster.

    After you select Enable Log Service, the system prompts you to create a Simple Log Service project. You can use one of the following methods to create a project:

    • Select Project

      You can select an existing project to manage the collected container logs.

    • Create Project

      Simple Log Service automatically creates a project to manage the collected container logs. ClusterID indicates the unique identifier of the created Kubernetes cluster.

Important

In the Component Configurations step of the wizard, Enable is selected for the Control Plane Component Logs parameter by default. If Enable is selected, the system automatically configures collection settings and collects logs from the control plane components of a cluster, and you are charged for the collected logs based on the pay-as-you-go billing method. You can determine whether to select Enable based on your business requirements. For more information, see Collect logs of control plane components in ACK managed clusters.

After the Logtail components are installed, Simple Log Service automatically generates a project named k8s-log-<YOUR_CLUSTER_ID> and resources in the project. You can log on to the Simple Log Service console to view the resources. The following table describes the resources.

Machine group

  • k8s-group-<YOUR_CLUSTER_ID>: the machine group of logtail-daemonset, which is used in log collection scenarios. Example: k8s-group-my-cluster-123.

  • k8s-group-<YOUR_CLUSTER_ID>-statefulset: the machine group of logtail-statefulset, which is used in metric collection scenarios. Example: k8s-group-my-cluster-123-statefulset.

  • k8s-group-<YOUR_CLUSTER_ID>-singleton: the machine group of a single instance, which is used to create a Logtail configuration for the single instance. Example: k8s-group-my-cluster-123-singleton.

Logstore

  • config-operation-log: stores the logs of the alibaba-log-controller component. We recommend that you do not create a Logtail configuration for this Logstore. You can delete the Logstore; after it is deleted, the system no longer collects the operational logs of alibaba-log-controller. You are charged for this Logstore in the same manner as for regular Logstores. For more information, see Billable items of pay-by-ingested-data.
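You can also verify from the command line that the Logtail components are running. The following commands are a minimal sketch; they assume that you have kubectl access to the cluster and that the components are deployed in the kube-system namespace, which is where the default ACK installation places them.

  # Check the Logtail components that the installation creates.
  # The kube-system namespace is assumed here; adjust it if your installation differs.
  kubectl -n kube-system get daemonset logtail-ds
  kubectl -n kube-system get deployment alibaba-log-controller
  kubectl -n kube-system get configmap alibaba-log-configuration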

Step 2: Create a Logtail configuration

The following table describes the methods that you can use to create a Logtail configuration. We recommend that you use only one method to manage a Logtail configuration.

• CRD - AliyunPipelineConfig (recommended)

  Description: You can use the AliyunPipelineConfig Custom Resource Definition (CRD), which is a Kubernetes CRD, to manage a Logtail configuration.

  Scenario: This method is suitable for scenarios that require complex collection and processing, and version consistency between the Logtail configuration and the Logtail container in an ACK cluster.

  Note: The logtail-ds component installed in the ACK cluster must be later than V1.8.10. For more information about how to update Logtail, see Update Logtail to the latest version.

• Simple Log Service console

  Description: You can manage a Logtail configuration in the GUI based on quick deployment and configuration.

  Scenario: This method is suitable for scenarios in which only simple settings are required to manage a Logtail configuration. Specific advanced features and custom settings cannot be used with this method.

• Environment variable

  Description: You can use environment variables to configure the parameters of a Logtail configuration in an efficient manner.

  Scenario: Only simple settings are supported. Complex processing logic is not supported, and only single-line text logs are supported. You can use environment variables to create a Logtail configuration that meets the following requirements:

    • Collect data from multiple applications to the same Logstore.

    • Collect data from multiple applications to different projects.

• CRD - AliyunLogConfig

  Description: You can use the AliyunLogConfig CRD, which is an earlier CRD, to manage a Logtail configuration.

  Scenario: This method is suitable for existing scenarios in which you already use the earlier CRD to manage Logtail configurations. Gradually replace the AliyunLogConfig CRD with the AliyunPipelineConfig CRD to obtain better extensibility and stability. For more information about the differences between the two CRDs, see CRDs.

CRD - AliyunPipelineConfig (recommended)

To create a Logtail configuration, you only need to create a Custom Resource (CR) from the AliyunPipelineConfig CRD. After the CR is created, the Logtail configuration takes effect.

Important

If you create a Logtail configuration by creating a CR and you want to modify the Logtail configuration, you can only modify the CR. If you modify the Logtail configuration in the Simple Log Service console, the new settings are not synchronized to the CR.

  1. Log on to the ACK console.

  2. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, find the cluster that you want to manage and click More in the Actions column. In the drop-down list that appears, click Manage ACK clusters.

  4. Create a file named example-k8s-file.yaml.

    You can use the Logtail configuration generator to generate a YAML script used to create a Logtail configuration for your scenario. For more information, see Logtail configuration generator. Alternatively, you can manually write a YAML script based on the following example.

    The following code provides an example of a YAML file that collects multi-line text logs from the test.LOG files in the /data/logs/app_1 directory of pods in the default namespace that are labeled with app: ^(.*test.*)$, and sends the logs to the automatically created k8s-file Logstore in the k8s-log-test project. You can modify the following parameters in the YAML file based on your business requirements:

    1. project: Example: k8s-log-test.

      Log on to the Simple Log Service console. Check the name of the project generated after Logtail is installed. In most cases, the project name is in the k8s-log-<YOUR_CLUSTER_ID> format.

    2. IncludeK8sLabel: the label used to filter pods. Example: app: ^(.*test.*)$. In this example, logs in the pods whose label key is app and label value contains test are collected.

      Note

      If you want to collect logs from containers whose names contain test, you can replace the IncludeK8sLabel parameter with the K8sContainerRegex parameter and use wildcard characters to specify its value. Example: K8sContainerRegex: ^(.*test.*)$.

    3. FilePaths: Example: /data/logs/app_1/**/test.LOG. For more information, see File path mapping for containers.

    4. Endpoint and Region: Example for the Endpoint parameter: cn-hangzhou.log.aliyuncs.com. Example for the Region parameter: cn-hangzhou.

    The value of the config parameter includes the types of input, output, and processing plug-ins and container filtering methods. For more information, see PipelineConfig. For more information about the complete parameters in the YAML file, see CR parameters.

    apiVersion: telemetry.alibabacloud.com/v1alpha1
    kind: ClusterAliyunPipelineConfig
    metadata:
      # Specify the name of the resource. The name must be unique in the current Kubernetes cluster. The name is the same as the name of the created Logtail configuration. If the resource name already exists, the name does not take effect.
      name: example-k8s-file
    spec:
      # Specify the name of the project.
      project:
        name: k8s-log-test
      logstores:
        # Create a Logstore named k8s-file.
        - name: k8s-file
      # Create a Logtail configuration.
      config:
        # Enter a sample log. You can leave this parameter empty.
        sample: |
          2024-06-19 16:35:00 INFO test log
          line-1
          line-2
          end
        # Specify the input plug-in.
        inputs:
          # Use the input_file plug-in to collect multi-line text logs from containers.
          - Type: input_file
            # Specify the file path in the containers.
            FilePaths:
              - /data/logs/app_1/**/test.LOG
            # Enable the container discovery feature.
            EnableContainerDiscovery: true
            # Enable the collection of container metadata.
            CollectingContainersMeta: true
            # Add conditions to filter containers. Multiple conditions are evaluated by using a logical AND.
            ContainerFilters:
              # Specify the namespace of the pods to which the required containers belong. Regular expression matching is supported.
              K8sNamespaceRegex: default
              # Specify the labels of the pods to which the required containers belong. Regular expression matching is supported.
              IncludeK8sLabel:
                app: ^(.*test.*)$
            # Enable multi-line log collection. If you want to collect single-line logs, delete this parameter.
            Multiline:
              # Specify the custom mode to match the beginning of the first line of a log based on a regular expression.
              Mode: custom
              # Specify the regular expression that is used to match the beginning of the first line of a log.
              StartPattern: '\d+-\d+-\d+\s\d+:\d+:\d+'
        # Specify the processing plug-in.
        processors:
          # Use the processor_parse_regex_native plug-in to parse logs based on the specified regular expression.
          - Type: processor_parse_regex_native
            # Specify the name of the original field.
            SourceKey: content
            # Specify the regular expression that is used for parsing. Use capturing groups to extract fields.
            Regex: (\d+-\d+-\d+\s\S+)(.*)
            # Specify the fields that you want to extract.
            Keys: ["time", "detail"]
        # Specify the output plug-in.
        flushers:
          # Use the flusher_sls plug-in to deliver logs to a specific Logstore.
          - Type: flusher_sls
            # Make sure that the Logstore exists.
            Logstore: k8s-file
            # Make sure that the endpoint is valid.
            Endpoint: cn-beijing.log.aliyuncs.com
            Region: cn-beijing
            TelemetryType: logs
  5. Run the kubectl apply -f example-k8s-file.yaml command. Then, Logtail starts to collect text logs from pods to Simple Log Service.
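    After you apply the CR, you can optionally check that it was created. The following commands are a sketch; they assume that kubectl resolves the lowercase singular name of the ClusterAliyunPipelineConfig kind, which is the usual default for a CRD.

    # Apply the CR, then inspect it. Any status reported by alibaba-log-controller appears in the output.
    kubectl apply -f example-k8s-file.yaml
    kubectl get clusteraliyunpipelineconfig example-k8s-file -o yaml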

CRD - AliyunLogConfig

To create a Logtail configuration, you only need to create a CR from the AliyunLogConfig CRD. After the CR is created, the Logtail configuration takes effect.

Important

If you create a Logtail configuration by creating a CR and you want to modify the Logtail configuration, you can only modify the CR. If you modify the Logtail configuration in the Simple Log Service console, the new settings are not synchronized to the CR.

  1. Log on to the ACK console.

  2. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, find the cluster that you want to manage and click More in the Actions column. In the drop-down list that appears, click Manage ACK clusters.

  4. Create a file named example-k8s-file.yaml.

    The following code provides an example of a YAML file used to create a Logtail configuration named example-k8s-file. The Logtail configuration collects text logs in simple mode from the test.LOG file in the /data/logs/app_1 directory of the pods whose names start with app in your cluster, and sends the logs to the automatically created k8s-file Logstore in the k8s-log-<YOUR_CLUSTER_ID> project.

    You can modify the log file path in the example based on your business requirements. For more information, see File path mapping for containers.

    • logPath: the log file path. Example: /data/logs/app_1.

    • filePattern: the name of the file from which you want to collect logs. Example: test.LOG.

    The logtailConfig parameter specifies the Logtail details, which include the types of input, output, and processing plug-ins and container filtering methods. For more information, see AliyunLogConfigDetail. For more information about the complete parameters in the YAML file, see CR parameters.

    apiVersion: log.alibabacloud.com/v1alpha1
    kind: AliyunLogConfig
    metadata:
      # Specify the name of the resource. The name must be unique in the current Kubernetes cluster.
      name: example-k8s-file
      # Specify the namespace to which the resource belongs.
      namespace: kube-system
    spec:
      # Specify the name of the project. If you leave this parameter empty, the project named k8s-log-<your_cluster_id> is used.
      # project: k8s-log-test
      # Specify the name of the Logstore. If the specified Logstore does not exist, Simple Log Service automatically creates a Logstore.
      logstore: k8s-file
      # Create a Logtail configuration.
      logtailConfig:
        # Specify the type of the data source. If you want to collect text logs, set the value to file.
        inputType: file
        # Specify the name of the Logtail configuration. The name must be the same as the resource name that is specified by metadata.name.
        configName: example-k8s-file
        inputDetail:
          # Specify the settings that allow Logtail to collect text logs in simple mode.
          logType: common_reg_log
          # Specify the log file path.
          logPath: /data/logs/app_1
          # Specify the log file name. You can use wildcard characters such as asterisks (*) and question marks (?) when you specify the log file name. Example: log_*.log.
          filePattern: test.LOG
          # If you want to collect text logs from containers, set the value to true.
          dockerFile: true
          # Enable multi-line log collection. If you want to collect single-line logs, delete this parameter.
          # Specify the regular expression to match the beginning of the first line of a log.
          logBeginRegex: \d+-\d+-\d+.*
          # Specify the conditions to filter containers.
          advanced:
            k8s:
              K8sPodRegex: '^(app.*)$'
  5. Run the kubectl apply -f example-k8s-file.yaml command. Then, Logtail starts to collect text logs from pods to Simple Log Service.
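    As an optional check, you can inspect the CR that you created. The following command is a sketch; it assumes that kubectl resolves the lowercase singular name of the AliyunLogConfig kind.

    # The CR is namespaced. The example above creates it in the kube-system namespace.
    kubectl -n kube-system get aliyunlogconfig example-k8s-file -o yaml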

Simple Log Service console

Note

This method is suitable for scenarios in which simple settings are required to manage a Logtail configuration without the need to log on to a Kubernetes cluster. You cannot batch create Logtail configurations by using this method.

  1. Log on to the Simple Log Service console.

  2. In the Projects section, click the project that you specified when you installed the Logtail components. Example: k8s-log-<your_cluster_id>. On the page that appears, click the Logstore that you want to manage, and then click Logtail Configurations. On the Logtail Configuration page, click Add Logtail Configuration. In the Quick Data Import dialog box, find the Kubernetes - File card and click Integrate Now.

  3. In the Machine Group Configurations step of the Import Data wizard, set the Scenario parameter to Kubernetes Clusters and the Deployment Method parameter to ACK Daemonset. Select the k8s-group-${your_k8s_cluster_id} machine group, click the > icon to move it from the Source Machine Group section to the Applied Server Groups section, and then click Next.

  4. Create a Logtail configuration. In the Logtail Configuration step of the Import Data wizard, configure the required parameters and click Next. Approximately 1 minute is required to create a Logtail configuration.

    The following list describes the main parameter settings. For more information, see Create a Logtail configuration.

    • Global Configurations

      In the Global Configurations section, configure the Configuration Name parameter.

    • Input Configurations

      • Logtail Deployment Mode: the Logtail deployment mode. Select Daemonset.

      • File Path Type: the type of the file path that you want to use to collect logs. Valid values: Path in Container and Host Path. If a hostPath volume is mounted to a container and you want to collect logs from files based on the mapped file path on the container host, set this parameter to Host Path. In other scenarios, set this parameter to Path in Container.

      • File Path: the directory that stores the logs that you want to collect. The file path must start with a forward slash (/). In this example, the File Path parameter is set to /data/wwwlogs/main/**/*.Log, which collects logs from files whose names end with .Log in the /data/wwwlogs/main directory and its subdirectories. The Maximum Directory Monitoring Depth parameter specifies the maximum number of subdirectory levels that the ** wildcard in the File Path value can match, as illustrated in the example after this list.
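        The following hypothetical directory layout illustrates the matching behavior. With File Path set to /data/wwwlogs/main/**/*.Log and Maximum Directory Monitoring Depth set to 2, files up to two subdirectory levels below /data/wwwlogs/main are collected:

        /data/wwwlogs/main/error.Log              # matched: the file is in the specified directory
        /data/wwwlogs/main/site-a/access.Log      # matched: one subdirectory level deep
        /data/wwwlogs/main/site-a/2024/app.Log    # matched: two subdirectory levels deep
        /data/wwwlogs/main/site-a/2024/06/app.Log # not matched: three levels deep exceeds the configured depth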

  5. Create indexes and preview data. By default, full-text indexing is enabled for Simple Log Service, and full-text indexes are created so that you can query all fields in logs. You can also manually create field indexes for the collected logs, or click Automatic Index Generation to let Simple Log Service generate field indexes. Field indexes allow accurate queries, reduce indexing costs, and improve query efficiency. For more information, see Create indexes.

Environment variable

Note

This method supports only single-line text logs. If you want to collect multi-line text logs or logs of other formats, use the preceding methods.

  1. Create an application and configure Simple Log Service.

    Configure Simple Log Service in the ACK console

    1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane, choose Workloads > Deployments.

    3. On the Deployments page, click Create from Image.

    4. In the Basic Information step of the Create wizard, configure the Name parameter, and then click Next. In the Container step of the Create wizard, configure the Image Name parameter.

      The following section describes only the settings related to Simple Log Service. For more information about other settings, see Create a stateless application by using a Deployment.

    5. In the Log section, configure log-related information.

      1. Create a Logtail configuration.

        Click Collection Configuration to create a Logtail configuration. Each Logtail configuration consists of the Logstore and Log Path In Container parameters.

        • Logstore: the name of the Logstore that is used to store the collected logs. If the Logstore does not exist, ACK automatically creates a Logstore in the Simple Log Service project that is associated with your ACK cluster.

          Note

          The default retention period of logs in a Logstore is 90 days.

        • Log Path in Container: the path from which you want to collect logs. A value of /usr/local/tomcat/logs/catalina.*.log indicates that Logtail collects text logs from the Tomcat application.

          By default, each Logstore corresponds to a Logtail configuration that you can use to collect logs by line in simple mode.

      2. Create custom tags.

        Click Custom Tag to create custom tags. Each custom tag is a key-value pair and is added to the collected logs. You can use custom tags to identify container logs. For example, you can specify a version number for the Tag Value parameter.

    Configure Simple Log Service by using a YAML template

    1. Log on to the Container Service for Kubernetes console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane, choose Workloads > Deployments.

    3. On the Deployments page, select a namespace from the Namespace drop-down list in the upper part of the page. Then, click Create From YAML in the upper-right corner of the page.

    4. Configure a YAML template.

      The syntax of the YAML template is the same as the Kubernetes syntax. However, to specify a collection configuration for a container, you must use env to add Collection Configurations and Custom Tags to the container. You must also create the corresponding volumeMounts and volumes based on the collection configuration. The following sample code provides an example of pod configurations:

      apiVersion: v1
      kind: Pod
      metadata:
        name: my-demo
      spec:
        containers:
        - name: my-demo-app
          image: 'registry.cn-hangzhou.aliyuncs.com/log-service/docker-log-test:latest'
          env:
          # Configure environment variables
          - name: aliyun_logs_log-varlog
            value: /var/log/*.log
          - name: aliyun_logs_mytag1_tags
            value: tag1=v1
          # Configure volume mounting
          volumeMounts:
          - name: volumn-sls-mydemo
            mountPath: /var/log
          # If the pod restarts repeatedly, you can add a sleep command to the startup parameters of the pod.
          command: ["sh", "-c"]  # Run commands in the shell.
          args: ["sleep 3600"]   # Make the pod sleep 3,600 seconds (1 hour).
        volumes:
        - name: volumn-sls-mydemo
          emptyDir: {}
      1. Use environment variables to create Collection Configurations and Custom Tags. All environment variables related to configurations use aliyun_logs_ as the prefix.

        • Add log collection configurations in the following format:

          - name: aliyun_logs_log-varlog
            value: /var/log/*.log                        

          In the example, a collection configuration is created in the aliyun_logs_{key} format. The value of {key} is log-varlog.

          • aliyun_logs_log-varlog: This environment variable creates a Logtail configuration named log-varlog that collects the content of the /var/log/*.log files in the container and stores the logs in a Logstore that is also named log-varlog.

        • Create Custom Tags in the following format:

          - name: aliyun_logs_mytag1_tags
            value: tag1=v1                       

          After a tag is added, the tag is automatically appended to the log data that is collected from the container. mytag1 is a custom name and cannot contain underscores (_).

      2. If your collection configuration specifies a log file path instead of stdout, you must also create the corresponding volumeMounts and volumes.

        In this example, the collection configuration collects logs from /var/log/*.log. Therefore, a volumeMount for the /var/log directory is added.

    5. After you complete the YAML template, click Create. The Kubernetes cluster executes the configuration.

  2. Use environment variables to configure advanced settings.

    Environment variable-based Logtail configuration supports various parameters. You can use environment variables to configure advanced settings to meet your log collection requirements.

    Important

    You cannot use environment variables to configure log collection in edge computing scenarios.

    • aliyun_logs_{key}

      Description:

        • Required. {key} can contain only lowercase letters, digits, and hyphens (-).

        • If the aliyun_logs_{key}_logstore variable is not specified, a Logstore named {key} is automatically created to store the collected logs.

        • To collect the stdout of a container, set the value to stdout. You can also set the value to a log file path in the container.

      Example:

        - name: aliyun_logs_catalina
          value: stdout
        - name: aliyun_logs_access-log
          value: /var/log/nginx/access.log

      Usage notes:

        • By default, logs are collected in simple mode. If you want to parse the collected logs, we recommend that you configure the related settings in the Simple Log Service console or by using CRDs.

        • {key} specifies the name of the Logtail configuration. The name must be unique in the Kubernetes cluster.

    • aliyun_logs_{key}_tags

      Description: Optional. Adds tags to logs. The value must be in the {tag-key}={tag-value} format.

      Example:

        - name: aliyun_logs_catalina_tags
          value: app=catalina

    • aliyun_logs_{key}_project

      Description: Optional. Specifies a Simple Log Service project. The default project is the one that is generated when Logtail is installed.

      Example:

        - name: aliyun_logs_catalina_project
          value: my-k8s-project

      Usage note: The project must reside in the same region as Logtail.

    • aliyun_logs_{key}_logstore

      Description: Optional. Specifies a Simple Log Service Logstore. Default value: {key}.

      Example:

        - name: aliyun_logs_catalina_logstore
          value: my-logstore

    • aliyun_logs_{key}_shard

      Description: Optional. Specifies the number of shards of the Logstore. Valid values: 1 to 10. Default value: 2. If the specified Logstore already exists, this variable does not take effect.

      Example:

        - name: aliyun_logs_catalina_shard
          value: '4'

    • aliyun_logs_{key}_ttl

      Description: Optional. Specifies the log retention period. Valid values: 1 to 3650. If you set the value to 3650, logs are permanently stored. The default retention period is 90 days. If the specified Logstore already exists, this variable does not take effect.

      Example:

        - name: aliyun_logs_catalina_ttl
          value: '3650'

    • aliyun_logs_{key}_machinegroup

      Description: Optional. Specifies the machine group in which the application is deployed. The default machine group is the one in which Logtail is deployed. For more information about how to use this variable, see Collect container logs from an ACK cluster.

      Example:

        - name: aliyun_logs_catalina_machinegroup
          value: my-machine-group

    • aliyun_logs_{key}_logstoremode

      Description: Optional. Specifies the type of the Logstore. Default value: standard. If the specified Logstore already exists, this variable does not take effect. Valid values:

        • standard: Standard Logstore. This type of Logstore supports the log analysis feature and is suitable for scenarios such as real-time monitoring and interactive analysis. You can use this type of Logstore to build a comprehensive observability system.

        • query: Query Logstore. This type of Logstore supports high-performance queries. The index traffic fee of a Query Logstore is approximately half that of a Standard Logstore. Query Logstores do not support SQL analysis and are suitable for scenarios in which the amount of data is large, the log retention period is long (weeks or months), or log analysis is not required.

      Example:

        - name: aliyun_logs_catalina_logstoremode
          value: standard
        - name: aliyun_logs_catalina_logstoremode
          value: query

      Usage note: To use this variable, make sure that the image version of the logtail-ds component is 1.3.1 or later.
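    The variables for the same {key} can be combined in one env section. The following snippet is a minimal sketch that reuses the catalina examples from the preceding table; the log path comes from the Tomcat example earlier in this topic, and the Logstore name, shard count, retention period, and Logstore type are illustrative values that you should adjust to your environment.

    # Environment variables for one Logtail configuration whose {key} is catalina.
    - name: aliyun_logs_catalina
      value: /usr/local/tomcat/logs/catalina.*.log
    - name: aliyun_logs_catalina_logstore
      value: catalina-logstore
    - name: aliyun_logs_catalina_shard
      value: '4'
    - name: aliyun_logs_catalina_ttl
      value: '30'
    - name: aliyun_logs_catalina_logstoremode
      value: query
    - name: aliyun_logs_catalina_tags
      value: app=catalina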

    • Custom requirement 1: Collect data from multiple applications to the same Logstore

      In this scenario, configure the aliyun_logs_{key}_logstore parameter. The following example shows how to collect stdout from two applications to the stdout-logstore Logstore.

      The {key} of Application 1 is set to app1-stdout, and the {key} of Application 2 is set to app2-stdout.

      Configure the following environment variables for Application 1:

      # Configure environment variables.
          - name: aliyun_logs_app1-stdout
            value: stdout
          - name: aliyun_logs_app1-stdout_logstore
            value: stdout-logstore

      Configure the following environment variables for Application 2:

      # Configure environment variables.
          - name: aliyun_logs_app2-stdout
            value: stdout
          - name: aliyun_logs_app2-stdout_logstore
            value: stdout-logstore
    • Custom requirement 2: Collect data from multiple applications to different projects

      In this scenario, perform the following steps:

      1. Create a machine group in each project and set the custom identifier of the machine group in the following format: k8s-group-{cluster-id}, where {cluster-id} is the ID of the cluster. You can specify a custom machine group name.

      2. Specify the project, Logstore, and machine group in the environment variables for each application. The name of the machine group is the same as the one that you create in the previous step.

        In the following example, the {key} of Application 1 is set to app1-stdout, and the {key} of Application 2 is set to app2-stdout. If the two applications are deployed in the same Kubernetes cluster, you can use the same machine group for the applications.

        Configure the following environment variables for Application 1:

        # Configure environment variables.
            - name: aliyun_logs_app1-stdout
              value: stdout
            - name: aliyun_logs_app1-stdout_project
              value: app1-project
            - name: aliyun_logs_app1-stdout_logstore
              value: app1-logstore
            - name: aliyun_logs_app1-stdout_machinegroup
              value: app1-machine-group

        Configure the following environment variables for Application 2:

        # Configure environment variables.
            - name: aliyun_logs_app2-stdout
              value: stdout
            - name: aliyun_logs_app2-stdout_project
              value: app2-project
            - name: aliyun_logs_app2-stdout_logstore
              value: app2-logstore
            - name: aliyun_logs_app2-stdout_machinegroup
              value: app1-machine-group

Step 3: Query and analyze logs

  1. Log on to the Simple Log Service console.

  2. In the Projects section, click the project that you want to manage to go to the details page of the project.

  3. In the left-side navigation pane, click the icon of the Logstore that you want to manage. In the drop-down list, select Search & Analysis to view the logs that are collected from your Kubernetes cluster.

Default fields in container text logs

The following table describes the fields that are included by default in each container text log.

• __tag__:__hostname__: the name of the container host.

• __tag__:__path__: the log file path in the container.

• __tag__:_container_ip_: the IP address of the container.

• __tag__:_image_name_: the name of the image that is used by the container.

• __tag__:_pod_name_: the name of the pod.

• __tag__:_namespace_: the namespace to which the pod belongs.

• __tag__:_pod_uid_: the unique identifier (UID) of the pod.
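For example, after you create field indexes for these tag fields, a query such as the following returns the logs of pods whose names start with nginx in the default namespace. This is an illustrative sketch; the pod name prefix is hypothetical, and the tag fields must be indexed before they can be queried.

  __tag__:_namespace_: default and __tag__:_pod_name_: nginx*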

References