Simple Log Service lets you install Logtail in DaemonSet or Sidecar mode and collect logs from a Kubernetes cluster. For information about the differences between the modes, see Install Logtail to collect logs from a Kubernetes cluster. This topic explains how to deploy Logtail in DaemonSet mode and collect standard output from Alibaba Cloud Container Service for Kubernetes (ACK) clusters.
Prerequisite
Simple Log Service is activated.
Considerations
To collect text logs, see Collect text logs from an ACK cluster in DaemonSet mode.
This topic is applicable only to ACK managed and dedicated clusters.
To collect container application logs from ACK Serverless clusters, see Collect application logs by using pod environment variables.
If you are using a self-managed Kubernetes cluster or your Alibaba Cloud ACK cluster and Simple Log Service belong to different Alibaba Cloud accounts, see Collect stdout and stderr from a self-managed cluster in DaemonSet mode (old version).
Step 1: Install Logtail components
Install Logtail components in an existing ACK cluster
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, go to the Add-ons page.
On the Logs and Monitoring tab of the Add-ons page, find the logtail-ds component and click Install.
Install Logtail components when you create an ACK cluster
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, click Create Kubernetes Cluster. In the Component Configurations step of the wizard, select Enable Log Service.
This topic describes only the settings related to Simple Log Service. For more information about other settings, see Create an ACK managed cluster.
After you select Enable Log Service, the system prompts you to create a Simple Log Service project. You can use one of the following methods to create a project:
- Select Project: Select an existing project to manage the collected container logs.
- Create Project: Simple Log Service automatically creates a project to manage the collected container logs. ClusterID indicates the unique identifier of the created Kubernetes cluster.
In the Component Configurations step of the wizard, Enable is selected for the Control Plane Component Logs parameter by default. If Enable is selected, the system automatically configures collection settings and collects logs from the control plane components of a cluster, and you are charged for the collected logs based on the pay-as-you-go billing method. You can determine whether to select Enable based on your business requirements. For more information, see Collect logs of control plane components in ACK managed clusters.
After the Logtail components are installed, Simple Log Service automatically generates a project named k8s-log-<YOUR_CLUSTER_ID> and creates resources in the project. You can log on to the Simple Log Service console to view these resources. The following table describes them.
| Resource type | Resource name | Description | Example |
| --- | --- | --- | --- |
| Machine group | k8s-group-<YOUR_CLUSTER_ID> | The machine group of logtail-daemonset, which is used in log collection scenarios. | k8s-group-my-cluster-123 |
| Machine group | k8s-group-<YOUR_CLUSTER_ID>-statefulset | The machine group of logtail-statefulset, which is used in metric collection scenarios. | k8s-group-my-cluster-123-statefulset |
| Machine group | k8s-group-<YOUR_CLUSTER_ID>-singleton | The machine group of a single instance, which is used to create a Logtail configuration for the single instance. | k8s-group-my-cluster-123-singleton |
| Logstore | config-operation-log | Stores the logs of the alibaba-log-controller component. We recommend that you do not create a Logtail configuration for this logstore. You can delete it; after it is deleted, the system no longer collects the operational logs of alibaba-log-controller. You are charged for this logstore in the same manner as for regular logstores. For more information, see Billable items of pay-by-ingested-data. | None |
Step 2: Create Logtail configurations
You can use one of the following methods to create Logtail configurations. Use only one method for each configuration:
| Method | Configuration description | Scenario |
| --- | --- | --- |
| CRD - AliyunPipelineConfig (recommended) | Use the AliyunPipelineConfig Custom Resource Definition (CRD), which is a Kubernetes CRD, to manage a Logtail configuration. | Suitable for scenarios that require complex collection and processing, and version consistency between the Logtail configuration and the Logtail container in an ACK cluster. Note: The logtail-ds component installed in the ACK cluster must be later than V1.8.10. For more information about how to update Logtail, see Update Logtail to the latest version. |
| Simple Log Service console | Manage a Logtail configuration in the GUI with quick deployment and configuration. | Suitable for scenarios in which only simple settings are required. Specific advanced features and custom settings cannot be used with this method. |
| Environment variable | Use environment variables to configure the parameters of a Logtail configuration in an efficient manner. | Only simple settings can be configured. Complex processing logic is not supported, and only single-line text logs are supported. |
| CRD - AliyunLogConfig | Use the AliyunLogConfig CRD, which is the old version CRD, to manage a Logtail configuration. | Suitable for existing scenarios in which the old version CRD is already used to manage Logtail configurations. We recommend that you gradually replace the AliyunLogConfig CRD with the AliyunPipelineConfig CRD for better extensibility and stability. For more information about the differences between the two CRDs, see CRDs. |
CRD - AliyunPipelineConfig (recommended)
To create Logtail configurations, simply create the AliyunPipelineConfig custom resources, which will take effect automatically.
For configurations created through custom resources, modifications must be made by updating the corresponding custom resource. Changes made in the Simple Log Service console will not sync to the custom resource.
Log on to the ACK console.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, go to the Custom Resources page.
On the Custom Resources page, click the CRDs tab, then click Create from YAML.
Modify the parameters in the following YAML example as needed, copy and paste it into the template, then click Create.
Note: You can use the Logtail configuration generator to create a YAML script for your target scenario. This tool helps you quickly complete the configuration and reduces manual operations.
The example YAML file below collects, in multi-line text mode, the standard output of pods in the default namespace whose `app` label matches `^(.*test.*)$`, and forwards it to a logstore named `k8s-stdout`, which is automatically created in a project named `k8s-log-<YOUR_CLUSTER_ID>`. Adjust the parameters in the YAML as needed:

- `project`: Log on to the Simple Log Service console and identify the name of the project created when you installed Logtail, typically in the format `k8s-log-<YOUR_CLUSTER_ID>`.
- `IncludeK8sLabel`: Filters the labels of the target pods. For example, `app: ^(.*test.*)$` indicates that the label key is `app`, and pods whose label values contain `test` are collected.
- `Endpoint` and `Region`: For example, `ap-southeast-1.log.aliyuncs.com` and `ap-southeast-1`.

For more information about the `config` section of the YAML file, such as supported inputs, outputs, processing plug-in types, and container filtering methods, see PipelineConfig. For a complete list of YAML parameters, see CR parameters.

```yaml
apiVersion: telemetry.alibabacloud.com/v1alpha1
# Create a ClusterAliyunPipelineConfig.
kind: ClusterAliyunPipelineConfig
metadata:
  # Specify the name of the resource. The name must be unique in the current Kubernetes
  # cluster. This name is also the name of the Logtail configuration that is created.
  name: example-k8s-stdout
spec:
  # Specify the target project.
  project:
    name: k8s-log-<YOUR_CLUSTER_ID>
  # Create a logstore for storing logs.
  logstores:
    - name: k8s-stdout
  # Define the Logtail configuration.
  config:
    # Sample log (optional).
    sample: |
      2024-06-19 16:35:00 INFO test log
      line-1
      line-2
      end
    # Define input plug-ins.
    inputs:
      # Use the service_docker_stdout plug-in to collect container output.
      - Type: service_docker_stdout
        Stdout: true
        Stderr: true
        # Container filter conditions. Multiple options are in an "and" relationship.
        # Namespace of the pods to collect from. Regular expressions are supported.
        K8sNamespaceRegex: "^(default)$"
        # Enable container metadata preview.
        CollectContainersFlag: true
        # Collect containers that meet the pod label conditions.
        # Multiple entries are in an "or" relationship.
        IncludeK8sLabel:
          app: ^(.*test.*)$
        # Multi-line configuration. Not used for single-line log collection.
        # Regular expression that matches the beginning of a log entry.
        BeginLineRegex: \d+-\d+-\d+.*
    # Define output plug-ins.
    flushers:
      # Use the flusher_sls plug-in to send logs to the specified logstore.
      - Type: flusher_sls
        # Make sure that the logstore exists.
        Logstore: k8s-stdout
        # Make sure that the endpoint is valid.
        Endpoint: ap-southeast-1.log.aliyuncs.com
        Region: ap-southeast-1
        TelemetryType: logs
```
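If the application writes one log entry per line, the multi-line settings can be omitted. The following sketch shows a hypothetical single-line variant of the same configuration; the resource name `example-k8s-stdout-single` is illustrative:

```yaml
apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: example-k8s-stdout-single   # hypothetical resource name
spec:
  project:
    name: k8s-log-<YOUR_CLUSTER_ID>
  logstores:
    - name: k8s-stdout
  config:
    inputs:
      - Type: service_docker_stdout
        Stdout: true
        Stderr: true
        K8sNamespaceRegex: "^(default)$"
        # No BeginLineRegex: each line of output becomes one log entry.
    flushers:
      - Type: flusher_sls
        Logstore: k8s-stdout
        Endpoint: ap-southeast-1.log.aliyuncs.com
        Region: ap-southeast-1
        TelemetryType: logs
```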
CRD - AliyunLogConfig
To create Logtail configurations, simply create the AliyunLogConfig custom resources, which will take effect automatically.
For configurations created through custom resources, modifications must be made by updating the corresponding custom resource. Changes made in the Simple Log Service Console will not sync to the custom resource.
Log on to the ACK console.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, go to the Custom Resources page.
On the Custom Resources page, click the CRDs tab, then click Create from YAML.
Modify the parameters in the following YAML example as needed, copy and paste it into the template, then click Create.
This YAML script creates a Logtail configuration named `simple-stdout-example`. It collects, in multi-line mode, the standard output of containers whose names begin with `app`, and transmits the collected data to a logstore named `k8s-stdout` in a project named `k8s-log-<YOUR_CLUSTER_ID>`.

For more information about the `logtailConfig` item in the YAML file, including supported inputs, outputs, processing plug-in types, and container filtering methods, see AliyunLogConfigDetail. For a complete list of YAML parameters, see CR parameters.

```yaml
# Standard output configuration
apiVersion: log.alibabacloud.com/v1alpha1
kind: AliyunLogConfig
metadata:
  # Specify the name of the resource. The name must be unique in the current Kubernetes cluster.
  name: simple-stdout-example
spec:
  # Specify the target project name (optional, default is k8s-log-<your_cluster_id>).
  # project: k8s-log-test
  # Specify the name of the logstore. If it does not exist, Simple Log Service creates it automatically.
  logstore: k8s-stdout
  # Specify the Logtail configuration.
  logtailConfig:
    # Specify the type of the data source. To collect standard output, set the value to plugin.
    inputType: plugin
    # Specify the name of the Logtail collection configuration.
    # The name must be the same as the resource name specified in metadata.name.
    configName: simple-stdout-example
    inputDetail:
      plugin:
        inputs:
          - type: service_docker_stdout
            detail:
              # Collect both stdout and stderr.
              Stdout: true
              Stderr: true
              # Namespace of the pods to collect from. Regular expressions are supported.
              K8sNamespaceRegex: "^(default)$"
              # Name of the containers to collect from. Regular expressions are supported.
              K8sContainerRegex: "^(app.*)$"
              # Multi-line configuration: regular expression for the beginning of a log entry.
              BeginLineRegex: \d+-\d+-\d+.*
```
Simple Log Service console
Log on to the Simple Log Service Console.
Select your project from the list, such as `k8s-log-<YOUR_CLUSTER_ID>`. On the project page, click Logtail Configurations for the target logstore, click Add Logtail Configuration, and then click Integrate Now under K8s - Stdout and Stderr - Old Version.

Because Logtail is already installed in the ACK cluster, select Use Existing Machine Groups.
On the Machine Group Configurations page, in the Kubernetes Clusters scenario, select the k8s-group-${your_k8s_cluster_id} machine group under the ACK Daemonset method, add it to the list of applied machine groups, and then click Next.
Create a Logtail configuration: enter the required settings as described below and click Next. The configuration takes effect in about one minute.
This section covers only the necessary configurations. For a complete list, see Global Configurations.
Global Configuration
Enter the configuration name in Global Configuration.
Create indexes and preview data: By default, Simple Log Service enables a full-text index, indexing all fields in the log for queries. You can also create a field index manually based on the collected logs, or click Automatic Index Generation. This will generate a field index for term queries on specific fields, reducing index costs and improving query efficiency.
Environment variables
Configure Simple Log Service when creating an application.
Console
Log on to the Container Service Management Console and click Clusters in the left-side navigation pane.
On the Clusters page, click the target cluster, then go to the Deployments page from the left-side navigation pane.
On the Deployments page, select a namespace and click Create from Image.
On the Basic Information page, set the application name, then click Next to enter the Container page.
This section introduces configurations related to Simple Log Service. For more information about other application configurations, see Create a stateless application by using a Deployment.
In the Log section, configure log-related information.
Set Collection Configuration.
Click Collection Configuration to create a new collection configuration. Each configuration consists of two items:
Logstore: Specify the logstore where the collected logs are stored. If the logstore does not exist, ACK will automatically create it in the Simple Log Service project associated with your cluster.
Note: The default log retention period for newly created logstores is 90 days.
Log Path in Container: Specify stdout to collect standard output and error output from the container.
Each collection configuration is automatically created as a Logtail configuration, and logs are collected in simple mode (by row) by default.
Set Custom Tag.
Click Custom Tag to create a custom tag. Each tag is a key-value pair that will be appended to the collected logs. Use it to label the log data of the container, such as the version number.
After configuring all settings, click Next. For subsequent steps, see Create a stateless application by using a Deployment.
YAML template
Log on to the Container Service Management Console and select Clusters from the left-side navigation pane.
On the Clusters page, click the target cluster name, then go to the Deployments page from the left-side navigation pane.
On the Deployments page, select a namespace and click Create from YAML.
Configure the YAML file.
The syntax of the YAML template is consistent with Kubernetes. To specify the collection configuration for the container, use `env` to add the Collection Configuration and Custom Tag to the container, and create the corresponding `volumeMounts` and `volumes` based on the collection configuration. Below is a simple pod example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '1'
  labels:
    app: deployment-stdout
    cluster_label: CLUSTER-LABEL-A
  name: deployment-stdout
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: deployment-stdout
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: deployment-stdout
        cluster_label: CLUSTER-LABEL-A
    spec:
      containers:
        - args:
            - >-
              while true; do date '+%Y-%m-%d %H:%M:%S'; echo 1; echo 2; echo 3;
              echo 4; echo 5; echo 6; echo 7; echo 8; echo 9; sleep 10; done
          command:
            - /bin/sh
            - '-c'
            - '--'
          env:
            - name: cluster_id
              value: CLUSTER-A
            # Collect the container's standard output into a logstore named "log-stdout".
            - name: aliyun_logs_log-stdout
              value: stdout
          image: 'mirrors-ssl.aliyuncs.com/busybox:latest'
          imagePullPolicy: IfNotPresent
          name: timestamp-test
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
```
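A Collection Configuration and a Custom Tag can be combined in the same `env` list. The following is a minimal sketch, in which the logstore name `log-stdout` and the tag `app_version=1.0` are illustrative assumptions:

```yaml
env:
  # Collect the container's standard output into a logstore named "log-stdout".
  - name: aliyun_logs_log-stdout
    value: stdout
  # Append the tag app_version=1.0 to every collected log entry (hypothetical tag).
  - name: aliyun_logs_log-stdout_tags
    value: app_version=1.0
```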
Create your Collection Configuration and Custom Tag using environment variables. All related environment variables start with the prefix `aliyun_logs_`.

The rules for creating collection configurations are as follows:

```yaml
- name: aliyun_logs_log-varlog
  value: /var/log/*.log
```

This example creates a collection configuration in the format `aliyun_logs_{key}`, where `{key}` is `log-varlog`. The variable `aliyun_logs_log-varlog` creates a logstore named `log-varlog`, sets the log collection path to /var/log/*.log, and names the corresponding Simple Log Service collection configuration `log-varlog` as well. The goal is to collect the contents of the container's /var/log/*.log files into the `log-varlog` logstore.
The rules for creating custom tags are as follows:

```yaml
- name: aliyun_logs_mytag1_tags
  value: tag1=v1
```

After a tag is configured, the corresponding field is automatically appended to the log data collected from the container. `mytag1` can be any name that does not contain an underscore (_).
If your collection configuration specifies a path other than stdout, you must create the corresponding `volumeMounts` in this section. In the example, the collection configuration adds collection for /var/log/*.log, so a `volumeMounts` entry for /var/log is added.
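As a sketch of that requirement, the container spec in the Deployment could declare the mount as follows. The volume name `varlog` and the use of `emptyDir` are illustrative assumptions; any writable volume type works:

```yaml
    spec:
      containers:
        - name: timestamp-test
          # ... image, command, and env as in the example above ...
          volumeMounts:
            # Mount a volume at /var/log so that files written there can be collected.
            - name: varlog
              mountPath: /var/log
      volumes:
        # emptyDir is an illustrative choice of volume type.
        - name: varlog
          emptyDir: {}
```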
After completing the YAML, click Create to submit the configuration to the Kubernetes cluster for execution.
Configure advanced parameters for environment variables.
Environment variables support various configuration parameters for log collection. Set advanced parameters as needed.
Important: Configuring log collection through environment variables is not suitable for edge computing scenarios.
| Field | Description | Example | Notes |
| --- | --- | --- | --- |
| aliyun_logs_{key} | Required. {key} can contain only lowercase letters, digits, and hyphens (-). If aliyun_logs_{key}_logstore does not exist, a logstore named {key} is created by default. If the value is stdout, the container's standard output is collected. Any other value is treated as a log path inside the container. | `- name: aliyun_logs_catalina value: stdout`<br>`- name: aliyun_logs_access-log value: /var/log/nginx/access.log` | The default collection mode is simple mode. To parse log content, use the Simple Log Service console and see Collect text logs from Kubernetes containers in DaemonSet mode or Collect stdout and stderr from Kubernetes containers in DaemonSet mode (old version). {key} specifies the name of the Logtail configuration in Simple Log Service and must be unique in the Kubernetes cluster. |
| aliyun_logs_{key}_tags | Optional. The value must be in the format {tag-key}={tag-value} and is used to tag the logs. | `- name: aliyun_logs_catalina_tags value: app=catalina` | None. |
| aliyun_logs_{key}_project | Optional. Specifies a project in Simple Log Service. If this variable does not exist, the project that you selected during installation is used. | `- name: aliyun_logs_catalina_project value: my-k8s-project` | The project must be deployed in the same region as Logtail. |
| aliyun_logs_{key}_logstore | Optional. Specifies a logstore in Simple Log Service. If this variable does not exist, the logstore name is the same as {key}. | `- name: aliyun_logs_catalina_logstore value: my-logstore` | None. |
| aliyun_logs_{key}_shard | Optional. Specifies the number of shards when a logstore is created. Valid values: 1 to 10. If this variable does not exist, the value is 2. Note: If the logstore already exists, this parameter does not take effect. | `- name: aliyun_logs_catalina_shard value: '4'` | None. |
| aliyun_logs_{key}_ttl | Optional. Specifies the log retention period. Valid values: 1 to 3650. A value of 3650 sets the retention period to permanent. If this variable does not exist, the default retention period is 90 days. Note: If the logstore already exists, this parameter does not take effect. | `- name: aliyun_logs_catalina_ttl value: '3650'` | None. |
| aliyun_logs_{key}_machinegroup | Optional. Specifies the machine group of the application. If this variable does not exist, the default machine group in which Logtail is installed is used. For detailed usage of this parameter, see Collect container logs from ACK clusters. | `- name: aliyun_logs_catalina_machinegroup value: my-machine-group` | None. |
| aliyun_logs_{key}_logstoremode | Optional. Specifies the logstore type. Default value: standard. Valid values: standard (supports the one-stop data analysis features of Simple Log Service; suitable for real-time monitoring, interactive analysis, and building complete observability systems) and query (supports high-performance queries; the index traffic cost is about half of standard, but SQL analysis is not supported; suitable for large data volumes, long storage periods of weeks or months, and no log analysis). Note: If the logstore already exists, this parameter does not take effect. | `- name: aliyun_logs_catalina_logstoremode value: standard`<br>`- name: aliyun_logs_catalina_logstoremode value: query` | This parameter requires logtail-ds image version 1.3.1 or later. |
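Several of the parameters above can be combined for the same {key}. A minimal sketch, in which the names `catalina`, `my-k8s-project`, and `my-logstore` are illustrative:

```yaml
env:
  # Collect stdout under the key "catalina".
  - name: aliyun_logs_catalina
    value: stdout
  # Store the logs in project "my-k8s-project" instead of the default project.
  - name: aliyun_logs_catalina_project
    value: my-k8s-project
  # Store the logs in logstore "my-logstore" instead of a logstore named "catalina".
  - name: aliyun_logs_catalina_logstore
    value: my-logstore
  # Create the logstore with 4 shards and a 30-day retention period.
  - name: aliyun_logs_catalina_shard
    value: '4'
  - name: aliyun_logs_catalina_ttl
    value: '30'
```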
Customization requirement 1: Collect data from multiple applications into the same logstore
To collect data from multiple applications into the same Logstore, set the aliyun_logs_{key}_logstore parameter. For example, the following configuration collects stdout from two applications into stdout-logstore.
In the example, the `{key}` for Application 1 is `app1-stdout`, and the `{key}` for Application 2 is `app2-stdout`.

The environment variables for Application 1 are as follows:
```yaml
# Configure environment variables
- name: aliyun_logs_app1-stdout
  value: stdout
- name: aliyun_logs_app1-stdout_logstore
  value: stdout-logstore
```
The environment variables for Application 2 are as follows:
```yaml
# Configure environment variables
- name: aliyun_logs_app2-stdout
  value: stdout
- name: aliyun_logs_app2-stdout_logstore
  value: stdout-logstore
```
Customization requirement 2: Collect data from different applications into different projects
To collect data from different applications into multiple projects, follow these steps:
In each project, create a machine group with the custom identifier `k8s-group-{cluster-id}`, where `{cluster-id}` is your cluster ID. The machine group name is customizable.

Configure the project, logstore, and machine group information in the environment variables for each application. Use the name of the machine group created in the previous step.
In the following example, the `{key}` for Application 1 is `app1-stdout`, and the `{key}` for Application 2 is `app2-stdout`. If both applications are deployed in the same Kubernetes cluster, you can use the same machine group for them.

The environment variables for Application 1 are as follows:
```yaml
# Configure environment variables
- name: aliyun_logs_app1-stdout
  value: stdout
- name: aliyun_logs_app1-stdout_project
  value: app1-project
- name: aliyun_logs_app1-stdout_logstore
  value: app1-logstore
- name: aliyun_logs_app1-stdout_machinegroup
  value: app1-machine-group
```
The environment variables for Application 2 are as follows:
```yaml
# Application 2: configure environment variables
- name: aliyun_logs_app2-stdout
  value: stdout
- name: aliyun_logs_app2-stdout_project
  value: app2-project
- name: aliyun_logs_app2-stdout_logstore
  value: app2-logstore
- name: aliyun_logs_app2-stdout_machinegroup
  value: app1-machine-group
```
Step 3: Query and analyze logs
Log on to the Simple Log Service console.
In the Projects section, click the project that you want to manage to go to the details page of the project.
In the left-side navigation pane, click the icon next to the logstore that you want to manage, and then select Search & Analysis from the drop-down list to view the logs collected from your Kubernetes cluster.
Default fields for container standard output (old version)
Each container standard output has the following default fields:
| Field Name | Description |
| --- | --- |
| _time_ | The time when the log was collected. |
| _source_ | The log source type: stdout or stderr. |
| _image_name_ | The name of the image. |
| _container_name_ | The name of the container. |
| _pod_name_ | The name of the pod. |
| _namespace_ | The namespace to which the pod belongs. |
| _pod_uid_ | The unique identifier of the pod. |
References
Create a dashboard to monitor the status of systems, applications, and services.
Configure alert rules to automatically generate alerts for exceptions in logs.
Troubleshoot collection errors.