Version: 1.5.1

Targets: Overview

Director uses targets as the output destinations where collected telemetry data is forwarded, stored, and analyzed. Targets provide flexible options for data persistence and integration with various analysis platforms.

Definitions

Targets serve as the endpoint destinations for processed data within the Director system. They operate on the following principles:

  1. Output Configuration: Targets define where and how data should be delivered after processing.
  2. Format Adaptation: They handle the conversion of internal data structures to destination-specific formats.
  3. Delivery Management: Targets manage connection pooling, batching, and retry mechanisms.
  4. Destination Integration: They provide authentication and protocol-specific features for various systems.
note

Targets enable:

  • Persistence: Local and remote storage with various retention options.
  • Integration: Seamless connection to analytics platforms and messaging systems.

They also support data transformation and delivery confirmation.

Configuration

All targets share the following base configuration fields:

Field | Required | Default | Description
------|----------|---------|------------
name | Y | - | Unique identifier for the target
description | N | - | Optional explanation
type | Y | - | Target type
status | N | true | Enable/disable the target
batch_size | N | 1000 | Records per batch
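
As a minimal sketch based on the table above, the base fields combine into a target definition like this (the type shown is simply one of the types used elsewhere on this page; the values are placeholders):

targets:
  - name: my_target               # required: unique identifier
    description: "Example target" # optional explanation
    type: elasticsearch           # required: target type
    status: true                  # default: enabled
    batch_size: 1000              # default: records per batch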
tip

Each target type provides specific configuration options detailed in their respective sections.

Use the name of the target to refer to it in your configurations.

Example:

targets:
  - name: elasticsearch
    type: elasticsearch
    properties:
      hosts: ["http://elasticsearch:9200"]
      index: "logs-%{+yyyy.MM.dd}"
      username: "elastic"
      password: "${PASSWORD}"

The target listed here is of type elasticsearch. It specifies the host to which the data will be forwarded, the index on that host to which the data will be appended, and the username and password used to access the host.
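
Other parts of the configuration can then reference this target by its name. The routes schema below is a sketch for illustration only and is not defined on this page:

routes:
  - name: forward_to_search
    targets:
      - name: elasticsearch   # refers to the target defined above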

tip

You can use environment variables like ${PASSWORD} for your credentials. This improves security by keeping credentials out of your configuration file.

Debug Options

Targets support debug configuration options for testing, troubleshooting, and development purposes. These options allow you to inspect data flow without affecting production systems.

Configuration

Debug options are configured under the debug property within target properties:

targets:
  - name: test_elastic
    type: elastic
    properties:
      index: "test-logs"
      endpoints:
        - endpoint: "http://elasticsearch:9200"
      debug:
        status: true
        dont_send_logs: false

Debug Fields

Field | Required | Default | Description
------|----------|---------|------------
debug.status | N | false | Enable debug logging for the target
debug.dont_send_logs | N | false | Prevent logs from being sent to the actual target

Debug Status

When debug.status is set to true, the target logs each event to the internal debugger before processing. This provides visibility into:

  • Message content being sent
  • Device information (ID, name, type)
  • Target type and operation details
  • Timing and sequence of events

Debug logs are written to the system's debug output and can be used to:

  • Verify data transformation and formatting
  • Troubleshoot pipeline processing issues
  • Monitor data flow in development environments
  • Audit message content during testing

Don't Send Logs

When debug.dont_send_logs is set to true, events are logged to the debugger but not sent to the actual target destination. This is useful for:

  • Safe Testing: Test configuration changes without affecting production systems
  • Development: Develop and validate pipelines without external dependencies
  • Cost Control: Avoid charges from cloud services during testing
  • Dry Runs: Verify event formatting and routing logic before deployment
warning

The dont_send_logs option only works when debug.status is also set to true. If debugging is disabled, logs will be sent normally regardless of the dont_send_logs setting.

Use Cases

Development Environment

Test your configuration safely without sending data to production targets:

targets:
  - name: dev_splunk
    type: splunk
    properties:
      endpoints:
        - endpoint: "https://splunk.example.com:8088/services/collector"
          token: "YOUR-TOKEN"
      index: "main"
      debug:
        status: true
        dont_send_logs: true

Troubleshooting

Enable debug logging to diagnose issues while still sending data:

targets:
  - name: debug_elastic
    type: elastic
    properties:
      index: "production-logs"
      endpoints:
        - endpoint: "http://elasticsearch:9200"
      debug:
        status: true
        dont_send_logs: false

Pipeline Validation

Verify pipeline transformations before enabling the target:

targets:
  - name: validate_transformations
    type: splunk
    properties:
      endpoints:
        - endpoint: "https://splunk.example.com:8088/services/collector"
          token: "YOUR-TOKEN"
      field_format: "cim"
      debug:
        status: true
        dont_send_logs: true

pipelines:
  - name: test_pipeline
    processors:
      - set:
          field: environment
          value: "development"

Staged Deployment

Test new target configurations in parallel with existing ones:

targets:
  # Production target (normal operation)
  - name: prod_elastic
    type: elastic
    properties:
      index: "production-logs"
      endpoints:
        - endpoint: "http://prod-elasticsearch:9200"

  # Test target (debug mode, no actual sending)
  - name: test_elastic
    type: elastic
    properties:
      index: "test-logs"
      endpoints:
        - endpoint: "http://test-elasticsearch:9200"
      debug:
        status: true
        dont_send_logs: true

Best Practices

Disable in Production: Always disable debug options in production environments to avoid performance overhead and excessive logging.

Use for Development: Enable dont_send_logs during development to prevent test data from reaching production systems.

Temporary Troubleshooting: Enable debug logging temporarily when investigating issues, then disable it once resolved.

Separate Configurations: Maintain separate configuration files for development and production environments with appropriate debug settings.

Monitor Debug Output: Ensure your logging system can handle the increased volume when debug logging is enabled.

Performance Considerations

Debug logging adds overhead to target processing:

  • Each event is serialized and written to the debug log
  • Additional function calls and memory allocation occur
  • Log I/O operations may impact throughput

For high-volume scenarios:

  • Disable debug logging in production
  • Use debug mode only for representative samples
  • Monitor system resources when debugging is enabled

Security Notes

Debug logs may contain sensitive information:

  • Message content is logged verbatim
  • Authentication tokens are not logged, but message content might contain PII
  • Ensure debug logs are secured with appropriate access controls
  • Review debug output before sharing for troubleshooting

Deployment

The following deployment types can be used:

  • One-to-many - data from a single source is routed to one or more destinations:

    Syslog → Local Storage + Analysis Platform

  • Many-to-one - data from multiple sources is routed to one destination:

    Syslog + Windows → Local Storage

  • Many-to-many - data from multiple sources is routed to multiple destinations:

    Syslog + Windows → Local Storage

    Syslog + Elasticsearch → Cloud Upload + Analysis Platform

  • Chained - data is routed sequentially from one destination to the next:

    Syslog → Local Storage → Analysis Platform

Multiple targets can be used for redundancy, normalization rules can be applied, and alerts can be configured for notification and error handling. The sketch below illustrates the one-to-many pattern.
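
This sketch reuses the target format shown earlier on this page; the file type name and the routes schema are assumptions for illustration:

targets:
  - name: local_storage
    type: file                  # hypothetical type name for local file output
    properties:
      format: "json"
  - name: analysis
    type: elasticsearch
    properties:
      hosts: ["http://elasticsearch:9200"]
      index: "logs-%{+yyyy.MM.dd}"

routes:
  - name: syslog_fanout         # one source fanned out to two destinations
    targets:
      - name: local_storage
      - name: analysis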

Target Types

Targets can be categorized into several functional types that serve different data management needs:

  • Analytical - These targets integrate with platforms designed for searching, analyzing, and visualizing data:

    • Elasticsearch, Splunk, and other search and analytics engines
  • Storage-Based - These targets focus on data persistence with various retention strategies:

    • Local files, S3 and other cloud storage services
  • Messaging - These targets publish data to distributed messaging systems:

    • Kafka, RabbitMQ, and other message brokers
  • Integration - These targets connect with external systems through APIs:

    • Webhooks, REST endpoints, and custom integrations

Use Cases

The most common uses of targets are:

  • Local analysis - Debug logging, performance analysis, audit trails, and temporary storage.

  • Cloud integration - Long-term storage, data warehousing, security analysis, and compliance monitoring.

  • Real-time analysis - Live monitoring, alert generation, trend analysis, and performance tracking.

  • Data lake building - Raw data storage, schema evolution, data partitioning, and analytics preparation.

To serve these ends, the following processing options are available (a combined example follows the list):

  • Pipelines - Field normalization (for ECS, CIM, ASIM, CEF, LEEF, and CSL), data transformation, message batching, custom field mapping, schema validation, and format conversion.

  • Buffer management - Configurable buffer sizes, batch processing, flush intervals, queue management, checkpoint recovery, and error handling.

  • Performance - Asynchronous writing, buffer optimization, connection pooling, retry mechanisms, resource monitoring, and size-based rotation.

  • Security - Authentication using API keys, service principals, and client certificates. Encryption with TLS/SSL, HTTPS, or custom algorithms. Also, access control and audit logging.
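
These options surface as configuration fields. For example, batching (a base field from the table above) and field normalization (shown in the Pipeline Validation example) combine as follows:

targets:
  - name: normalized_splunk
    type: splunk
    batch_size: 500               # base field: records per batch
    properties:
      field_format: "cim"         # normalize fields to the CIM schema
      endpoints:
        - endpoint: "https://splunk.example.com:8088/services/collector"
          token: "YOUR-TOKEN"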

Implementation Strategies

Storage Types

Local

Director supports the following local data output methods:

  • Console - Direct stdout writing with real-time message viewing. Also provides debugging and testing capability, format normalization, and synchronous writing with mutex locking.

  • Files - Multiple file formats are supported:

    • json - Each log entry is written as a separate JSON line (JSONL format)
    • multijson - All log entries are written as a single JSON array
    • avro - Apache Avro format with schema
    • parquet - Apache Parquet columnar format with schema

    Compression options such as ZSTD, GZIP, Snappy, Brotli, and LZ4 are also supported. Additional features include dynamic file naming, size-based rotation, buffer management, and schema validation. A sketch of a file target follows below.
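
A hedged sketch of a local file target based on the capabilities listed above; the type and property names (format, compression, max_size) are assumptions, so check the file target's own section for the actual fields:

targets:
  - name: local_archive
    type: file                    # assumed type name for local file output
    properties:
      format: "parquet"           # json, multijson, avro, or parquet
      compression: "zstd"         # assumed property for the compression codec
      max_size: "512MB"           # assumed property for size-based rotation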

Cloud

The integration options for the cloud are below:

  • Azure Blob - Direct blob writing and multiple containers are supported. Available authentication methods are service principal and managed identity. Other features include automatic retries, exponential backoff, size-based chunking, connection pooling, and buffer management.

  • Microsoft Sentinel - Direct DCR integration and ASIM normalization are supported. The standard tables WindowsEvent, SecurityEvent, CommonSecurityLog, and Syslog can be used, and various ASIM tables are also available. (See the ASIM section for a complete list.)

Redundant Delivery

For critical data flows, implement redundant delivery patterns (a configuration sketch follows the list):

  • Primary and backup targets for important data
  • Multi-region storage for disaster recovery
  • Disk buffer with forward-on-recovery capability
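
A compact sketch of a primary/backup pairing, using the same illustrative file type and routes schema as the deployment example above:

targets:
  - name: primary_elastic
    type: elastic
    properties:
      index: "critical-logs"
      endpoints:
        - endpoint: "http://primary-elasticsearch:9200"
  - name: backup_file
    type: file                    # assumed type name for local disk buffering
    properties:
      format: "json"

routes:
  - name: critical_flow           # both targets receive the same data
    targets:
      - name: primary_elastic
      - name: backup_file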

Routing Optimization

Configure intelligent routing based on data characteristics (a sketch follows the list):

  • Route high-volume data to scalable storage targets
  • Send critical alerts to real-time notification endpoints
  • Direct compliance data to specialized archival systems
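
A sketch of characteristic-based routing; the route names describe intent, the target names are hypothetical, and the actual selection mechanism (filters or conditions) would follow the routing documentation:

routes:
  - name: high_volume_metrics     # bulk data to a scalable storage target
    targets:
      - name: local_archive
  - name: critical_alerts         # alerts to a real-time notification endpoint
    targets:
      - name: alert_webhook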