I would like to know what you are trying to achieve. What’s the desired output of the whole configuration?
As far as I can tell, the tag “counter” exists only in the measurement “sqlserver_performance”, which is not included in the namepass filter, and therefore not considered.
Also, that tagpass filter is excluding the other two metrics (schedulers and waitstats) because they do not contain the “counter” tag; that’s why you don’t have data.
tagpass: A table mapping tag keys to arrays of glob pattern strings. Only metrics that contain a tag key in the table and a tag value matching one of its patterns are emitted.
Maybe the taginclude filter will be better for this, but you will have to test a bit, since you must ensure that the different measurements are passed/filtered in different ways.
One last note: if you don’t need a measurement, use the “exclude_query” option provided by the plugin. This way the system won’t be queried at all, whereas with namepass/namedrop/etc. you gather the data and then discard it.
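For example, a minimal sketch of where each of these options lives in the config (the connection string, the query name and the counter patterns below are placeholders; check the plugin README of your Telegraf version for the exact query names):
[[inputs.sqlserver]]
  servers = ["Server=myserver;Port=1433;app name=telegraf;log=1;"]   # placeholder connection string
  # don't even run the queries you don't need:
  exclude_query = ["Schedulers"]
  # filtering of the gathered metrics:
  namepass = ["sqlserver_performance"]
  [inputs.sqlserver.tagpass]
    counter = ["*Requests*"]   # glob patterns matched against the counter tag values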
In fact, I don’t see any inefficiency; the points are already as optimized as they can be.
The merge aggregator won’t help in any way, since those metrics can’t be aggregated, as the docs state:
Use this plugin when fields are split over multiple metrics, with the same measurement, tag set and timestamp. By merging into a single metric they can be handled more efficiently by the output.
The measurement is not the same; in fact, you have at least 3 different measurements.
Even if you rename the measurement, the points do not share the same tag set (all the tag keys and values); each measurement may have its own specific tags (e.g. “counter” in the performance measurement), and without them the data would lose context and be useless.
If you are having performance issues the problem might be elsewhere.
Current Situation
No, that’s not possible using the merge aggregator because “counter” is a tag with different values.
The tag key is the same, but the tag value is different, meaning that the two points belong to different series (a series is identified by the measurement and the tag set; the merge aggregator additionally requires the same timestamp).
In order to achieve what you expect, the whole structure of the measurement would have to change (a pretty breaking change) by pivoting the counter, which would result in the counters being saved as different columns instead of different rows.
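To illustrate (counter names, values and timestamp below are made up, and the real measurement carries more tags than shown): before pivoting, each counter is its own series with a single “value” field; after pivoting, the counter name becomes the field key:
sqlserver_performance,counter=Batch\ Requests/sec value=42 1650000000000000000
sqlserver_performance,counter=Page\ life\ expectancy value=3000 1650000000000000000
becomes
sqlserver_performance Batch\ Requests/sec=42 1650000000000000000
sqlserver_performance Page\ life\ expectancy=3000 1650000000000000000
Points that end up with the same tag set and timestamp are then stored as a single row with one column per counter.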
I will also add that this is surely not going to happen soon (or at all) in the sqlserver plugin itself.
Possible Solution
What you can do, though, is use the pivot processor to achieve it.
If you use this solution, the better option is to drop the existing measurement so that it is recreated on the first run; otherwise you will end up with leftover tags and fields (“counter” and “value”).
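A minimal sketch of that processor (assuming the performance metrics keep the “counter” tag and “value” field mentioned above; adjust the namepass to your measurement names):
[[processors.pivot]]
  namepass = ["sqlserver_performance"]   # only pivot the performance metrics
  tag_key = "counter"                    # the tag whose values become field names
  value_key = "value"                    # the field whose value is moved into the new field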
I hope this is my last question about the metrics:
I have 3 tasks in the scheduler for my backup: 1. Full backup, 2. Incremental backup, 3. Transaction log backup. They all start at different intervals. From which metrics can I read that the tasks have completed successfully?
You can create a custom one though (e.g. using PowerShell) to query the backup-related tables and get backup-related data, or create a query to get the job history data.
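For the job-history route, a rough sketch of the kind of query it could run (not from this thread; the tables and columns are the standard msdb ones, adjust the filtering to your jobs):
-- SQL Server Agent job outcomes; run_status 1 = succeeded, 0 = failed
SELECT  j.name       AS job_name,
        h.run_status,
        h.run_date,
        h.run_time
FROM    msdb.dbo.sysjobs j
JOIN    msdb.dbo.sysjobhistory h ON h.job_id = j.job_id
WHERE   h.step_id = 0            -- step 0 is the job outcome row
ORDER BY h.run_date DESC, h.run_time DESC;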
An example of custom gathering for SQL Server using PowerShell and the exec plugin here:
Try running the script:
with the same user (the one that runs the telegraf service)
on the same machine (maybe there are “execution policy” limitations or an outdated PowerShell version)
Run the script from PowerShell directly (to check for PS errors)
Run the script from cmd, as telegraf does (i.e., from the cmd console run: powershell -File “PathToPowerShellFile”)
Set up telegraf in the simplest possible way and log everything:
enable debug logging
set up a config file that runs only the minimum, in this case the exec input and an output (see the minimal config sketch after this list)
use a test script
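A minimal sketch of such a test config (the paths here are placeholders, adjust them to your setup):
# telegraf.conf - bare minimum for testing the exec input
[agent]
  interval = "30s"
  debug = true                        # enable debug logging

[[inputs.exec]]
  commands = ['powershell -NoProfile -File "C:\temp\test.ps1"']   # absolute path to the test script
  timeout = "15s"
  data_format = "influx"              # the script prints Influx line protocol

[[outputs.file]]
  files = ['C:\temp\telegraf-test.out']   # or "stdout"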
You can also create a PowerShell script that returns a fixed string, to check whether the problem is the script or the configuration.
sample below:
# Put this in a .ps1 file and execute it; it should output the data in the "Test" measurement
Write-Host 'Test,Tag1="dog",Tag2="nice" Field1="random text",Field2=12'
Note:
If nothing happens and you don’t see any error, there might still be an error: the examples above do not use exit codes, which is what telegraf uses to detect errors. (I still don’t have a solution with proper exit code management.) In this case, the best way I’ve found is to apply the steps listed at point 1…
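For reference, a rough sketch of what explicit exit-code handling could look like in a script (this is not from the thread and is untested here; when the command exits non-zero, the exec input should report the failure instead of silently producing no metrics):
try {
    # ... gather the data and Write-Output the line protocol here ...
    exit 0
}
catch {
    Write-Error $_      # goes to stderr, so it should show up in the telegraf log
    exit 1              # non-zero exit code marks the run as failed
}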
If you still don’t get any result, post here the exec input config and a sample of the input/output.
I think I found the problem (but not a solution). When I start Telegraf.exe via PowerShell, it works! The metrics from your ps1 file are written to the output file.
.\telegraf.exe --config .\telegraf.conf
But if I start Telegraf as a service (net start telegraf), it won’t execute the ps1 script.
The Telegraf service is started with an MSA (Managed Service Account). This account also has the privilege to execute the ps1 file. (See screenshots.)
I don’t know what the issue is, but it might be related to the paths used. Always use an absolute path (to the ps1 file); when you run something as a service, its working directory is system32, and any relative path will start from there.
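For example (the path is a placeholder), in the exec input point at the script with its full path:
[[inputs.exec]]
  commands = ['powershell -NoProfile -File "C:\Telegraf\scripts\backups.ps1"']   # absolute, not .\backups.ps1
  data_format = "influx"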
But apparently it cannot be converted to the Influx string. When I execute the ps1 file, I get no data back:
$Datasource = "MSSQL-SA\INSTANCE01"
$Database = "Nordwind"
$User = "TestUser"
$Password = "TestPassword"
$connectionString = "Server=$Datasource;uid=$User; pwd=$Password;Database=$Database;Integrated Security=true;"
# Create a Connection object
$Connection = New-Object System.Data.SqlClient.SqlConnection
# Pass the ConnectionString to the Connection object
$Connection.ConnectionString = $connectionString
# Open the Connection
$Connection.Open()
# SQL Query
$Query = "SELECT sdb.Name AS DatabaseName, COALESCE(CONVERT(VARCHAR(12), MAX(bus.backup_finish_date), 101),'-') AS LastBackUpTime
FROM sys.sysdatabases sdb
LEFT OUTER JOIN msdb.dbo.backupset bus ON bus.database_name = sdb.name
GROUP BY sdb.Name"
# Command Object for Query
$Befehl = $Connection.CreateCommand()
# Query as CommandText
$Befehl.CommandText = $Query
# Query as Reader
$Resultat = $Befehl.ExecuteReader()
# Create a DataTable object for the SQL data
$Backups = new-object System.Data.DataTable
# Load the result into the DataTable
$Backups.Load($Resultat)
# Close the Connection to SQL Server
$Connection.Close()
## How many rows are in the result
#$Backups.Rows.Count
## Output the data
$Backups = @{label="Backups";expression={$_.Row}}
# Define the Metrics
$Measurement = "LastBackup"
$TagList = ""
$MetricList = "Backups"
$BackupData | Select-Object $Backups | ConvertTo-Metric -Measure $Measurement -MetricProperty $MetricList | ConvertTo-InfluxLineString -WarningAction SilentlyContinue
I had a quick look at it; there were a bunch of errors (on the PowerShell side).
You can find the script below, with some edits (which you can of course remove):
I’ve used current-user AD authentication (the telegraf service account once this runs under the service), so no user and password are required in the file
commented out some lines and added some comments to point out errors or give examples
slightly changed the query to match the naming convention of the other measurements and to provide the “LastBk” value both as a string and as an actual date (unix time)
added the -ExcludeEmptyMetric switch parameter (to ConvertTo-InfluxLineString), since you cannot send null values to InfluxDB
added tags
$Datasource = "QDLP03\SQL2012"
$Database = "Master"
#$User = "TestUser"
#$Password = "TestPassword"
#$connectionString = "Server=$Datasource;uid=$User; pwd=$Password;Database=$Database;Integrated Security=true;"
$connectionString = "Server=$Datasource;;Database=$Database;Integrated Security=true;"
# Create a Connection object
$Connection = New-Object System.Data.SqlClient.SqlConnection
# Pass the ConnectionString to the Connection object
$Connection.ConnectionString = $connectionString
# Open the Connection
$Connection.Open()
# SQL Query
$Query = "SELECT
REPLACE(@@SERVERNAME,'\',':') AS [sql_instance]
,sdb.Name AS [database]
,COALESCE(CONVERT(VARCHAR(12), MAX(bus.backup_finish_date), 101),'-') AS [last_backup_time_str]
,CAST( DATEDIFF(second,{d '1970-01-01'}, MAX(bus.backup_finish_date)) AS bigint) * 1000000000 AS [last_backup_time_unix_ns] -- seconds to nanoseconds
FROM sys.sysdatabases sdb
LEFT OUTER JOIN msdb.dbo.backupset bus
ON bus.database_name = sdb.name
GROUP BY
sdb.Name"
# Command Object for Query
$Befehl = $Connection.CreateCommand()
# Query as CommandText
$Befehl.CommandText = $Query
# Query as Reader
$Resultat = $Befehl.ExecuteReader()
# Create a DataTable object for the SQL data
$Backups = new-object System.Data.DataTable
# Load the result into the DataTable
$Backups.Load($Resultat)
# Close the Connection to SQL Server
$Connection.Close()
## How many rows are in the result
#$Backups.Rows.Count
## Output the data
# $Backups = @{label="Backups";expression={$_.Row}} #!! overrides the data object
# To be used in a Select-Object: a calculated property (label/expression) that gets its value from another property
# Samples below
# $MyLabelObj = @{label="MyCustomLabel";expression={$_.ExistingObjProperty}}
# $BackupDate = @{label="backup_date";expression={$_.last_backup_time_str}}
# $DatabaseName = @{label="database";expression={$_.database}}
# Define the Metrics
$Measurement = "LastBackup"
$TagList = "sql_instance","database"
$MetricList = "last_backup_time_str","last_backup_time_unix_ns"
# $BackupData does not exist
# $BackupData | Select-Object $Backups | ConvertTo-Metric -Measure $Measurement -MetricProperty $MetricList | ConvertTo-InfluxLineString -WarningAction SilentlyContinue
# -ExcludeEmptyMetric has been added; Influx cannot receive NULL values
$Backups | ConvertTo-Metric -Measure $Measurement -MetricProperty $MetricList -TagProperty $TagList | ConvertTo-InfluxLineString -ExcludeEmptyMetric -WarningAction SilentlyContinue
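For reference, each emitted line should be Influx line protocol shaped roughly like this (the date and timestamp values are made up, and the exact quoting/typing depends on the ConvertTo-InfluxLineString module):
LastBackup,sql_instance=QDLP03:SQL2012,database=Nordwind last_backup_time_str="01/01/2024",last_backup_time_unix_ns=1704067200000000000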
Some additional notes/suggestions:
InfluxDB has data types, so check which data type it will use for each field; otherwise, type conflicts may occur (in the future)
you may want to manage NULL values yourself (in the query)
since you are using a query, manage the names in there (column names, which then become tag and field names); this will leave you with less to manage in PowerShell. The renaming method using PS labels is useful when you cannot control the names yourself.
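For example (a sketch only, matching the query above), the unix-time column could be defaulted so it is never NULL (here 0 stands for “never backed up”; you could also filter such rows out instead):
,COALESCE(CAST(DATEDIFF(second,{d '1970-01-01'}, MAX(bus.backup_finish_date)) AS bigint) * 1000000000, 0) AS [last_backup_time_unix_ns]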