Route Data
Lastly, now that our Pipeline is configured and we're reshaping data the way we'd like, let's set up a Route to select a subset of our data to manipulate, and send it down the `splunk-metrics` Pipeline.
Configure Route
- If necessary, select the **Processing** submenu and click **Pipelines**, then give the `splunk-metrics` Pipeline focus.
- At the top, click **Attach to Route**.
- To the right of the `default` Route, click **...** and click **Insert Route Above**. (This inserts your new Route at the top, so Stream will process it first.)
- For **Route Name**, enter `Splunk Metrics`.
- For **Filter**, enter `source.endsWith('metrics.log')`.
- For **Pipeline**, select `splunk-metrics`.
- For **Output**, select `s3:s3`.
- Set the **Final** slider to **No**.
- Click **Save**.
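The Filter field accepts a JavaScript expression that Stream evaluates against each event. As a rough sketch of how the expression above selects events (the sample events here are hypothetical, not real forwarder output):

```javascript
// Hypothetical sample events, shaped loosely like forwarder data.
const events = [
  { source: '/opt/splunk/var/log/splunk/metrics.log', host: 'fwd-01' },
  { source: '/var/log/syslog', host: 'web-02' },
];

// Same expression as the Route's Filter: source.endsWith('metrics.log')
const matched = events.filter(e => e.source.endsWith('metrics.log'));

console.log(matched.length); // 1 — only the metrics.log event matches the Route
```

Only events whose `source` field ends with `metrics.log` match the Route; everything else falls through to be evaluated by the Routes below it.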
By this point, most things above should be pretty self-explanatory. **Route Name** is descriptive. **Filter** is a filter expression – like those we've used before – which matches events whose `source` field ends with `metrics.log`. **Pipeline** is set to the Pipeline we were just working on, and **Output** is set to the `s3` Destination we created earlier. The only thing that bears explanation is **Final**, an important concept in Stream.
If **Final** is set to `Yes`, the event is consumed: it is sent down the Route's Pipeline for processing, and no further Routes evaluate it. If **Final** is set to `No`, a copy of the event is sent down the Pipeline, while the original can still be evaluated and matched by subsequent Routes.
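The Final semantics can be sketched as a simple evaluation loop. This is a conceptual illustration, not Stream's internal implementation; the route and event shapes here are hypothetical:

```javascript
// Routes are evaluated top to bottom, mirroring the Data Routes page.
const routes = [
  { name: 'Splunk Metrics', filter: e => e.source.endsWith('metrics.log'), final: false },
  { name: 'default',        filter: () => true,                            final: true  },
];

function matchRoutes(event) {
  const matched = [];
  for (const route of routes) {
    if (!route.filter(event)) continue; // Filter didn't match; try the next Route
    matched.push(route.name);           // a copy goes down this Route's Pipeline
    if (route.final) break;             // Final: Yes consumes the event here
  }                                     // Final: No lets later Routes see it too
  return matched;
}

console.log(matchRoutes({ source: 'metrics.log' })); // ['Splunk Metrics', 'default']
```

Because we set Final to `No` on the `Splunk Metrics` Route, a matching event is both sent to the `splunk-metrics` Pipeline and still delivered by the `default` Route below it.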
This centralizes, for the most part, the branching and routing decisions in the product. The Data Routes page gives you a single place for troubleshooting. It also provides a simple facility for treating data in different ways – depending on where you might be sending it, or on whether you want it in multiple shapes.
Now, let's see if we are indeed dropping events.
Monitoring
- In the top nav, click **Monitoring**. (Note: Depending on the size of your window, the top nav may consolidate items that won't fit into a pulldown represented by an ellipsis (**...**); if so, click the ellipsis, then click **Monitoring**.)
- In the **Monitoring** submenu, click the **Data** pulldown, then click **Pipelines**.
Note that the number of events coming into the `splunk-metrics` Pipeline should be greater than the number of events going out.
Now, we should be seeing events showing up in our S3 bucket. Let's validate, via the terminal, what data is getting deposited.
Validate Output
- Look at just one record from the `splunk_metrics` sourcetype, to verify that its event shape mirrors what we saw in the Preview pane. Copy/paste this command into the terminal:

```shell
mc find minio/logarchives -path '*/splunk_metrics/*.json' -exec "mc cat {}" 2>/dev/null | head -1 | jq .
```
- Next, run this command. Depending on how long it's been since you saved the Route, its output should report at least 20, up to a few hundred, events:

```shell
mc find minio/logarchives -path '*/splunk_metrics/*.json' -exec "mc cat {}" 2>/dev/null | wc -l
```
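The `wc -l` count works because the Destination writes newline-delimited JSON: one event per line. A quick sketch of that relationship, using a hypothetical two-event payload:

```javascript
// Hypothetical NDJSON payload, shaped loosely like the archived metric events.
const ndjson = [
  '{"metric_name":"cpu_pct","value":12.5}',
  '{"metric_name":"mem_used","value":2048}',
].join('\n');

// Counting non-empty lines counts events — the same arithmetic `wc -l` does.
const events = ndjson.split('\n').filter(Boolean).map(line => JSON.parse(line));

console.log(events.length); // 2 — one event per line
```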
That's it! We're successfully routing data from a Splunk Universal Forwarder; parsing, reshaping, and enriching it; and then delivering it to an S3 bucket!
So, on to our conclusion.