The Source page of the structured log file policy editor enables you to specify which log file the policy reads. You can also configure the policy to extract structured data from the log file by applying the defined log file structure. The policy retains that structured data for reuse in other pages of the policy editor.
In the BSM Connector user interface, click in the toolbar. Then click Event > Structured Log File.
In the BSM Connector user interface, click in the toolbar. Then click Metrics > Structured Log File.
In the BSM Connector user interface, click in the toolbar. Then click Generic output > Structured Log File.
Alternatively, double-click an existing policy to edit it.
Click Source to open the policy Source page.
A log file structure is defined by using the OM pattern-matching language, so that the dynamic parts of text-based events can be extracted from any log file row, assigned to variables, and then used as parameters to build the event description or to set other attributes. For more information on OM pattern matching in policy rules, see Pattern Matching in Policy Rules.
You can use an asterisk (<*>) wildcard in the Log File Path / Name field to match multiple file names. For example, to match the source file names events.1.log and events.2.log, use the pattern <path>/events<*>.log in the Log File Path / Name field. Note that the <*> wildcard is the only supported OM pattern in log file paths. For more information on pattern matching, see Pattern-Matching Details.
Example 1: Use OM pattern-matching language to extract the log file structure from the following log file line:
Mon, 28 Jul 2014 23:19:29 GMT;SEVERE;frogproc;123456;ERR-123;failed connect to db ‘pond’
To do this, define the fields of which the log file line logically consists, and assign the corresponding variables by which these fields can be organized within a structure. The log file line in this example logically consists of the following fields:
timestamp;severity;processname;pid;errorcode;errortext
Allocate the appropriate variable extractions to all fields by using the OM pattern-matching language, as follows:
<*.timestamp>;<*.severity>;<*.processname>;<*.pid>;<*.errorcode>;<*.errortext>
Each field of the log file line can now be identified by its variable name, which can also be used in all subsequent policy operations, such as mappings, default attributes, and rules.
For example, to set the Title field of the event attributes to the value of the errortext field, enter <$DATA:errortext> in the Title field of the Event Attributes tab of the editor, or simply drag the errortext property from the Sample Data tab to the Title field. In the Rules tab, the field is referred to as errortext in the Property field.
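The extraction in Example 1 can be sketched outside the product. The following Python snippet (an illustration with a named-group regular expression, not the OM matcher itself) splits the sample line into the same six fields:

```python
# Sketch: the OM pattern <*.timestamp>;<*.severity>;... assigns each
# semicolon-separated field to a named variable. A named-group regex
# illustrates the same extraction; this is not the OM matcher.
import re

line = ("Mon, 28 Jul 2014 23:19:29 GMT;SEVERE;frogproc;123456;"
        "ERR-123;failed connect to db 'pond'")

pattern = re.compile(
    r"(?P<timestamp>[^;]*);(?P<severity>[^;]*);(?P<processname>[^;]*);"
    r"(?P<pid>[^;]*);(?P<errorcode>[^;]*);(?P<errortext>.*)"
)

fields = pattern.match(line).groupdict()
print(fields["errortext"])  # the value later referenced as <$DATA:errortext>
```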
In addition to defining a log file structure by using the OM pattern-matching language, you can identify a log file structure by using static fields.
Example 2: Static fields are word lists of nonrecurring data from the log file, separated by commas. If only one metric per line is present, all fields can be addressed. For example, use static fields to extract the log file structure from the following log file line:
1380004749|tcpc113.RIESLING.INTERN|LogicalDisk|C:|% Free Space|66.379264831543|Microsoft.Windows.Server.2008.LogicalDisk
To do this, define the fields of which the log file line logically consists, and use these fields as static fields together with the defined field separator character. The log file line in this example logically consists of the following fields:
timestamp|hostname|entitytype|entityid|countername|countervalue|scomtype
The corresponding static fields should be entered as follows:
timestamp,hostname,entitytype,entityid,countername,countervalue,scomtype
The field separator character is the pipe symbol (|). Note that the static fields require a comma instead of the pipe symbol as a delimiter.
Note: For performance reasons, this is the recommended method.
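Conceptually, static-field parsing amounts to splitting each line on the configured separator character and pairing the values with the comma-separated field names. A minimal Python sketch (illustration only, not the actual BSM Connector parser):

```python
# Sketch: split the log line on the separator char and zip the values
# with the comma-separated static field names from the example above.
# Illustration only; this is not the actual BSM Connector parser.
static_fields = ("timestamp,hostname,entitytype,entityid,"
                 "countername,countervalue,scomtype").split(",")
separator = "|"

line = ("1380004749|tcpc113.RIESLING.INTERN|LogicalDisk|C:|"
        "% Free Space|66.379264831543|"
        "Microsoft.Windows.Server.2008.LogicalDisk")

record = dict(zip(static_fields, line.split(separator)))
print(record["countername"], "=", record["countervalue"])
```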
The "Recurring fields" configuration parameter is useful when more than one performance value is present within a single log file line. It is a word list that contains the recurring part of the log line. Each recurrence creates a record in the store.
Example 3: Extract the log file structure from the following log file lines by also using recurring fields:
1380004749|tcpc113.RIESLING.INTERN|LogicalDisk|C:|% Free Space|66.379264831543|Current Disk Queue Length|0|Avg. Disk sec/Transfer|0.000484383897855878
1380004748|tcpc113.RIESLING.INTERN|Network Interface|10|Bytes Total/sec|55230.0703125|Current Bandwidth|1000000000
To do this, define the fields of which the log file lines logically consist, then identify which of them can be addressed as static fields and which form a variable part consisting of an arbitrary number of countername-countervalue pairs. These pairs are the recurring fields. The log file lines in this example logically consist of the following fields:
timestamp|hostname|entitytype|entityid|countername_1|countervalue_1|countername_2|countervalue_2|countername_3|countervalue_3
timestamp|hostname|entitytype|entityid|countername_1|countervalue_1|countername_2|countervalue_2
The corresponding static fields should be entered as follows:
timestamp,hostname,entitytype,entityid
In addition, enter the following recurring fields:
countername,countervalue
The field separator character is the pipe symbol (|).
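The combination of static and recurring fields can be sketched as follows: the static fields consume the leading columns, the remaining columns are grouped into recurring (countername, countervalue) pairs, and each recurrence yields its own record. The Python snippet below is an illustration only, not the actual BSM Connector parser:

```python
# Sketch: static fields consume the leading columns; the rest of the
# columns are grouped into recurring (countername, countervalue) pairs,
# and each recurrence produces a separate record. Illustration only.
static_fields = ["timestamp", "hostname", "entitytype", "entityid"]
recurring_fields = ["countername", "countervalue"]

def parse(line, sep="|"):
    parts = line.split(sep)
    static = dict(zip(static_fields, parts[: len(static_fields)]))
    rest = parts[len(static_fields):]
    n = len(recurring_fields)
    records = []
    for i in range(0, len(rest), n):
        rec = dict(static)                          # copy the static part
        rec.update(zip(recurring_fields, rest[i : i + n]))
        records.append(rec)
    return records

line = ("1380004749|tcpc113.RIESLING.INTERN|LogicalDisk|C:|"
        "% Free Space|66.379264831543|Current Disk Queue Length|0|"
        "Avg. Disk sec/Transfer|0.000484383897855878")
for r in parse(line):
    print(r["countername"], r["countervalue"])
```

The first sample line from Example 3 produces three records, one per countername-countervalue pair, each carrying the same static timestamp, hostname, entitytype, and entityid values.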
Static fields can also be specified by using the OM pattern-matching language. However, for performance reasons, this is not the recommended method. The syntax is as follows:
<*.timestamp>\|<*.hostname>\|<*.entitytype>\|<*.entityid>
The line start indicator enables you to differentiate structured log file entries based on their logical relationship, regardless of their span in the log file. When a log entry that represents a single logical unit spans more than one line in the log file, you can differentiate it from other log entries by identifying a line start indicator in the log file, and then specifying the matched line start pattern by using the OM pattern-matching language.
For example, the following tomcat.log file excerpt contains four logically separated log entries that span multiple log lines; however, all of them start with a timestamp (May 19, 2015 2:39:01 PM):
May 19, 2015 2:39:01 PM org.apache.catalina.core.StandardService initInternal
SEVERE: Failed to initialize connector [Connector[HTTP/1.1-30000]]
org.apache.catalina.LifecycleException: Failed to initialize component [Connector[HTTP/1.1-30000]]
	at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:106)
	at org.apache.catalina.core.StandardService.initInternal(StandardService.java:559)
	at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
	at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:821)
	at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
	at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
	... 12 more
May 19, 2015 2:39:01 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 3622 ms
May 19, 2015 2:39:01 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Catalina
May 19, 2015 2:39:01 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: HP OpenView TomcatB
Therefore, in this case, the line start indicator must match the timestamp pattern, as follows:
<*> <#>, <4#> <#>:<#>:<#> <2*>
In this instance, the following applies:
Log text | May | 19 | 2015 | 2 | 39 | 01 | PM |
---|---|---|---|---|---|---|---|
OM Pattern | <*> | <#> | <4#> | <#> | <#> | <#> | <2*> |
Description | Matches string | Matches digit(s) | Matches 4 digit(s) | Matches digit(s) | Matches digit(s) | Matches digit(s) | Matches string with length 2 |
Note: The punctuation marks and spaces in the line start pattern represent the static strings derived from the log file text.
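The grouping performed by a line start indicator can be sketched as follows: each input line that matches the line start pattern opens a new logical entry, and every other line is appended to the current entry. The Python snippet below approximates the OM pattern <*> <#>, <4#> <#>:<#>:<#> <2*> with a regular expression; it is an illustration only, not the OM matcher:

```python
# Sketch: group multi-line log records into logical entries using a
# line start indicator. The regex approximates the OM pattern
# "<*> <#>, <4#> <#>:<#>:<#> <2*>" (leading timestamp); illustration only.
import re

line_start = re.compile(r"^\w+ \d+, \d{4} \d+:\d+:\d+ \w{2} ")

lines = [
    "May 19, 2015 2:39:01 PM org.apache.catalina.startup.Catalina load",
    "INFO: Initialization processed in 3622 ms",
    "May 19, 2015 2:39:01 PM org.apache.catalina.core.StandardService startInternal",
    "INFO: Starting service Catalina",
]

entries = []
for line in lines:
    if line_start.match(line) or not entries:
        entries.append([line])       # a new logical entry starts here
    else:
        entries[-1].append(line)     # continuation of the current entry

print(len(entries))  # 2
```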
This task describes how to configure the structured log file source file and how the policy reads it.
Type the full path to the log file on the BSM Connector system.
Click to load a sample log file. You can load a sample file from the BSM Connector system or from the system where the Web browser runs.
When you load sample data, BSM Connector replaces already loaded data with the new data. This does not affect any mappings that are defined based on previously available sample data.
For the Event integration only:
For the Metric integration only: In the Logfile Structure field:
UI Element | Description
---|---
Structured Logfile source |
Log File Path / Name |
Path and name of the structured log file that the policy reads. Note: BSM Connector cannot process log files that are larger than 2 GB. |
Polling Interval |
Determines how often the policy reads the structured log file (in days, hours, minutes, and seconds). This period of time is the polling interval. The larger the polling interval, the lower the performance impact; however, more memory is used (depending on the amount of data in the log file). Setting the polling interval below 30 seconds is not recommended; the default setting is usually appropriate. Note that a policy begins to evaluate data after the first polling interval passes, unless the Default value: 5 minutes Note: Make sure that you set this value to a minimum of |
Logfile Character Set |
Name of the character set used by the structured log file that the policy reads. Note: It is important to choose the correct character set. If the character set that the policy is expecting does not match the character set in the structured log file, pattern matching may not work, and the event details can have incorrect characters or be truncated in OMi. If you are unsure of which character set is used by the structured log file that the policy reads, consult the documentation of the program that writes the file. Default value: UTF-8 |
Send event if log file does not exist |
BSM Connector sends an event if the specified structured log file does not exist. Default value: not selected |
Close after reading |
If you select this option, the file handle of the structured log file is closed and reopened after the polling interval. The file is read from the last position. If the file had a rollover in the meantime, it is read from the beginning. If the name of the structured log file changes and a new file was started in the meantime, the policy continues to read the new structured log file, and the original structured log file data is lost. If you do not select this option, the file handle remains open and the file is read entirely each time, unless there is a newer file with the same name (or name pattern). In that case, the original structured log file is read to the end, and then the newer file is read. Therefore, no data is lost. Consider the following example: a policy reads the structured log file If this option is selected, the unread data from the If this option is not selected, the unread data from the new Default value: not selected |
Read Mode |
The read mode of a structured log file policy indicates whether the policy processes the entire file or only new entries.
Note: Every policy reads the same structured log files independently of any other policies. This means, for example, that if "Policy 1" with read mode Read from beginning (first time) is activated and "Policy 2" with the same read mode already exists, "Policy 1" still reads the entire file after it is activated. Default value: Read from last position |
Sample Data | |||||||
![]() |
Loads the log file into BSM Connector. The log file can be loaded from Server or from the local file system. Note: BSM Connector can only load a maximum of 50 MB of sample data. |
![]() |
Opens the Structured logfile sample data dialog box. This dialog box contains the following tabs:
|
Logfile Structure | |||||||
Log File Pattern (for events only) |
A pattern by which the log file's structure is extracted, and which is used in all other policy operations. This pattern should comply with the standard pattern definition used by all HP Operations Manager products (OM pattern). For example, this structure could look as follows: For more information about how this structure is extracted from a log file, see Configuring Data Source in Structured Log File Policies. For more information about OM pattern matching, see Pattern Matching in Policy Rules. |
Data Fields (for metrics only) |
|
Recurring Fields (for metrics only) |
A word list that contains the recurring part from the log line. Each recurrence creates a record in the store. For example:
For more information about the recurring fields, see Using recurring fields in defining a log file structure (Metric only). |
Data Field Separator (for metrics only) |
The separator character that is used to separate data fields in the log file. |
Line Start Indicator (for events only) |
Line Start Pattern |
This field enables you to differentiate structured log file entries based on their logical relationship, regardless of their span in the log file. You can do this by identifying a line start indicator in the log file, and then specifying the matched line start pattern by using the OM pattern-matching language. For more information, see Setting the Line Start Indicator (Event only). |