Lookup
In this section, we're going to use the Lookup Function to enrich an event, and then use that enrichment to make decisions on our data. You might want to enrich an event with data from a threat list, or allow traffic through only from a certain list of network blocks. Lookup lets us add that context for use when making decisions.
First, let's create a lookup file in the UI. For this example, we'll create a lookup file with just a couple of entries, and look up the orderType field against it.
Important: Add Lookup

- Select the Processing submenu and click Knowledge. Note: Depending on the size of your window, the top nav consolidates items that won't fit into a pulldown represented by an ellipsis (...). If so, click the ellipsis, then select Processing and click Knowledge.
- Click Lookups at the left.
- Click Add New at the top right.
- From the drop-down, choose Create with Text Editor.
- Under Filename, enter addOrders.csv.
- In the right pane (one big field), paste the following table:

  orderType,addOrder
  NewActivation,yes
  AddLOS,yes

- Click Save.
This lookup table shows off one of the simplest use cases for Lookups: enriching events based on a match, to support further routing or filtering logic. In this case, we're going to look up the orderType field; a match adds the addOrder field to the event, and we'll then drop anything whose orderType doesn't match NewActivation or AddLOS.
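The enrichment step can be sketched in plain JavaScript. This is an illustrative stand-in for what the Lookup Function does with our table, not Cribl's actual implementation; the enrich function and in-memory table are hypothetical:

```javascript
// Minimal sketch of lookup-based enrichment (illustrative, not Cribl's API).
// The table mirrors addOrders.csv, keyed on the orderType column.
const lookupTable = {
  NewActivation: { addOrder: 'yes' },
  AddLOS: { addOrder: 'yes' },
};

function enrich(event) {
  const match = lookupTable[event.orderType];
  // On a match, copy the table's output fields onto the event;
  // otherwise the event passes through unchanged, with no addOrder field.
  return match ? { ...event, ...match } : event;
}

console.log(enrich({ orderType: 'NewActivation' })); // gains addOrder: 'yes'
console.log(enrich({ orderType: 'Upgrade' }));       // unchanged, no addOrder
```

Unmatched events are left intact rather than rejected here; the filtering decision happens in a separate Function, which keeps enrichment and routing logic independent.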
Next, we need to add our Lookup Function to the Pipeline.
Important: Add Lookup to Pipeline

- Select the Processing submenu and click on Pipelines.
- Select business_event under Pipelines. To display this column's header and contents, you might need to drag the pane and column dividers toward the right.
- In the right pane, make sure Sample Data has focus.
- Click Simple next to the be.log capture.
- At left, click + Function, search for Lookup, and click it.
- For the new Lookup Function's Filter, enter sourcetype=='business_event'.
- Click in Lookup file path to open its drop-down.
- Select addOrders.csv, the lookup file you created in the previous section.
- Set Lookup Field Name in Event to orderType.
- Click Save.
When you're done, your lookup should look like this:
Now, we're going to drop all events which do not match.
Important: Drop Events Without addOrder

- Click + Function, search for Drop, and then click it.
- Scroll down and click into the new Drop Function.
- Paste this into the Drop Function's Filter field: sourcetype=='business_event' && !addOrder
- Click Save.
What this filter is saying, in JavaScript, is: if sourcetype is business_event, and addOrder (which we retrieved from the lookup) is not truthy, then drop the event. addOrder will be truthy whenever the lookup matches, so only unmatched events are dropped. This could also be written as addOrder!=='yes' to compare against an exact string, or addOrder===undefined to drop only events where the lookup added no field at all.
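The truthiness behind that filter can be exercised directly in JavaScript. This sketch evaluates the same expression our Drop Function uses, with a hypothetical shouldDrop helper standing in for the Function's filter check:

```javascript
// Sketch of the Drop Function's filter logic (illustrative, not Cribl's API).
// An event is dropped when the filter expression evaluates to true.
function shouldDrop(event) {
  return event.sourcetype === 'business_event' && !event.addOrder;
}

// Enriched event: addOrder is 'yes', which is truthy, so !addOrder is false.
console.log(shouldDrop({ sourcetype: 'business_event', addOrder: 'yes' })); // false (kept)

// Unmatched event: addOrder is undefined, which is falsy, so !addOrder is true.
console.log(shouldDrop({ sourcetype: 'business_event' })); // true (dropped)

// Events from other sourcetypes fail the first condition and are never dropped.
console.log(shouldDrop({ sourcetype: 'web_access' })); // false (kept)
```

Note that any truthy value in addOrder (for example, 'no') would also keep the event; if that matters, use the stricter addOrder!=='yes' form.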
Now, all events whose orderType is not NewActivation or AddLOS will be dropped. You can see this in the Preview pane. Let's validate with a capture that the filtered events have only those two orderType values.
Important: Run a Capture

- Click Sample Data.
- Click Capture New.
- In the Capture Sample Data dialog, replace the Filter Expression field's contents with cribl_pipe=='business_event'.
- Click Capture.
- Under Where to capture, select Before the post-processing Pipeline.
- Click Start.
By default, Capture grabs events right as they come into Cribl. We set this capture to instead run after our processing Pipeline, meaning that the Lookup and Drop Functions have already run. The events returned should have an orderType value of only NewActivation or AddLOS.
For our next section, we want all events back.
Important: Disable Drop Function

- Click Cancel to close the Capture Sample Data modal.
- In the Drop Function's header row, slide the On toggle to Off.
- Click Save.
This toggle is an example of retaining a Function in a Pipeline's structure, but disabling it to debug – or in our case, to refine – the Pipeline's behavior.
Next, we're going to explore techniques to control data volume or to completely reshape data – like suppression, sampling, and aggregation.