Quick Start: Routes

This chapter walks you through common scenarios for configuring routes that forward data to specific destinations.

note

For a general discussion, see our overview chapter.

Configuration Files

note

Before configuring routes, verify that the pipelines and targets to be used in the routes are already configured and accessible.

See the introductory chapters for configuring targets and pipelines.
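For illustration, a minimal sketch of the prerequisite components might look like the following. The names here (storage, normalize_logs) are only placeholders; the actual fields each target and pipeline requires depend on its type, so consult the targets and pipelines chapters for the real settings:

targets:
  - name: storage
    # ... target-specific settings ...

pipelines:
  - name: normalize_logs
    # ... processor definitions ...

The routes in the examples below refer to these components by name.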

The simplest route that can be configured relays the data as is, without any processing:

routes:
  - name: basic_forward
    description: "Forward all logs to storage"
    targets:
      - name: storage

Here, we are forwarding the raw data to a previously configured target named storage.

Using Pipelines

A route can use a pipeline as part of its forwarding process.

We can have a single pipeline:

routes:
  - name: process_logs
    description: "Process and store logs"
    pipelines:
      - name: normalize_logs
    targets:
      - name: storage

This configuration normalizes the data using the normalize_logs pipeline before sending it to the target named storage.

We can also use several pipelines consecutively:

routes:
  - name: complex_processing
    description: "Multi-stage processing"
    pipelines:
      - name: normalize
      - name: enrich
      - name: aggregate
    targets:
      - name: analytics

This time we are using three pipelines, applied in order, whose purposes are reflected in their names: normalizing, enriching, and aggregating. The data is then routed to a target used for analytics.

Selection

Since the routing operation occurs amidst high telemetry traffic, pipelines can also be used to selectively process specific data streams.

This can be done using device types:

routes:
  - name: syslog_route
    if: device.type == 'syslog'
    pipelines:
      - name: syslog_normalize
    targets:
      - name: syslog_storage

  - name: windows_route
    if: device.type == 'windows'
    pipelines:
      - name: windows_normalize
    targets:
      - name: windows_storage

These routes collect two streams of data, one from syslog devices and one from Windows devices, normalize each stream with its own pipeline, and then direct them to their respective targets.

The selection can also be done using datasets:

routes:
  - name: security_dataset
    if: dataset.name == 'security_logs'
    pipelines:
      - name: security_process
    targets:
      - name: security_analytics

  - name: performance_dataset
    if: dataset.name == 'performance_metrics'
    pipelines:
      - name: metrics_process
    targets:
      - name: metrics_platform

Our first route collects data from a dataset used for security logs, and the second from another one used for performance metrics.

Forwarding

The same data can be sent to multiple targets, a technique known as mirroring:

routes:
  - name: mirror_logs
    description: "Store logs in multiple locations"
    pipelines:
      - name: normalize
    targets:
      - name: primary_storage
      - name: backup_storage
      - name: analytics_platform

The data here is received by three different targets: a primary and a backup storage, plus an analytics platform.

Conditionals

The filtering required to select the data for a route is done with conditional statements, as some of the examples above show. The conditions can be as simple as picking a specific device type:

routes:
  - name: firewall_logs
    description: "Process firewall logs"
    if: device.type == 'firewall'
    pipelines:
      - name: firewall_pipeline
    targets:
      - name: security_storage

Or they can be complex, drilling down to attributes of the data such as the severity of security events or the date range of the collected records:

routes:
  - name: critical_errors
    if: log.severity == 'critical' && log.date >= '2024.05.01'
    pipelines:
      - name: urgent_process
    targets:
      - name: alerts
      - name: storage

This route collects critical errors logged on or after a certain date and forwards them to two separate targets.
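These techniques can be combined in a single route: a condition to select the stream, a chain of pipelines to process it, and multiple targets to mirror the result. A sketch, assuming pipelines named normalize and enrich and targets named alerts and backup_storage are already configured:

routes:
  - name: critical_mirror
    description: "Process critical logs and mirror them to two targets"
    if: log.severity == 'critical'
    pipelines:
      - name: normalize
      - name: enrich
    targets:
      - name: alerts
      - name: backup_storage

Here all logs whose severity is critical pass through both pipelines in order, and the processed output is delivered to both targets.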
