Adding upstream version 1.34.4.
Signed-off-by: Daniel Baumann <daniel@debian.org>
Parent: e393c3af3f
Commit: 4978089aab
4963 changed files with 677545 additions and 0 deletions
plugins/inputs/intel_pmt/README.md (new file, 402 lines)
@@ -0,0 +1,402 @@
# Intel® Platform Monitoring Technology Input Plugin

This plugin collects metrics via the Linux kernel driver for
Intel® Platform Monitoring Technology (Intel® PMT), an architecture capable of
enumerating and accessing hardware monitoring capabilities on supported devices.

⭐ Telegraf v1.28.0
🏷️ hardware, system
💻 linux

## Requirements

- supported device
- Linux kernel >= 5.11
- `intel_pmt_telemetry` module loaded (on kernels 5.11-5.14)
- `intel_pmt` module loaded (on kernels 5.14+)

This plugin supports devices exposing PMT, such as:

- 4th Generation Intel® Xeon® Scalable Processors (Sapphire Rapids / SPR)
- 6th Generation Intel® Xeon® Scalable Processors (Granite Rapids / GNR)

Support has been added to the mainline Linux kernel under the platform driver
(`drivers/platform/x86/intel/pmt`) which exposes the Intel PMT telemetry space
as a sysfs entry at `/sys/class/intel_pmt/`. Each discovered telemetry
aggregator is exposed as a directory (with a `telem` prefix) containing a `guid`
file identifying the unique PMT space. Each GUID is associated with a set of XML
specification files which can be found in the [Intel-PMT Repository][repo].
The XML specification must be provided as an absolute path to the `pmt.xml`
file using the `spec` setting.

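As a quick illustration of this layout, the standalone sketch below (not part
of the plugin) lists the discovered aggregator directories and the GUID each of
them reports:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Each telemetry aggregator appears as /sys/class/intel_pmt/telem<N>.
	dirs, err := filepath.Glob("/sys/class/intel_pmt/telem*")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, dir := range dirs {
		// The guid file identifies the PMT space and selects the matching XML spec.
		raw, err := os.ReadFile(filepath.Join(dir, "guid"))
		if err != nil {
			fmt.Fprintf(os.Stderr, "skipping %s: %v\n", dir, err)
			continue
		}
		fmt.Printf("%s -> guid %s\n", dir, strings.TrimSpace(string(raw)))
	}
}
```
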
This plugin discovers and parses the telemetry data exposed by the kernel driver
using the specification inside the XML files. It then reads low-level
samples/counters, evaluates high-level samples/counters according to
transformation formulas, and reports the collected values.

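As a rough sketch of what reading a low-level counter involves (an assumption
for illustration only; the actual decoding is done by the plugin from the XML
metadata), a counter is a bit field cut out of a raw telemetry value using the
`lsb`/`msb` positions given in the specification:

```go
package main

import "fmt"

// extractBits returns the bit field [lsb, msb] of a raw 64-bit telemetry value.
// Hypothetical helper for illustration; the real decoding lives in the plugin.
func extractBits(raw uint64, lsb, msb uint) uint64 {
	width := msb - lsb + 1
	if width >= 64 {
		return raw >> lsb
	}
	return (raw >> lsb) & ((uint64(1) << width) - 1)
}

func main() {
	// e.g. a counter occupying bits 4..35 of the raw sample
	fmt.Println(extractBits(0x0000_0012_3456_7890, 4, 35))
}
```
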
> [!IMPORTANT]
> PMT space is located in `/sys/class/intel_pmt` with `telem` files requiring
> **root privileges** to be read. If Telegraf is not running as root you should
> add the following capability to the Telegraf executable:
>
> ```sh
> sudo setcap cap_dac_read_search+ep /usr/bin/telegraf
> ```

[repo]: https://github.com/intel/Intel-PMT

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering, etc.
See the [CONFIGURATION.md][CONFIGURATION.md] for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

## Configuration

```toml @sample.conf
# Intel Platform Monitoring Technology plugin exposes Intel PMT metrics available through the Intel PMT kernel space.
# This plugin ONLY supports Linux.
[[inputs.intel_pmt]]
  ## Filepath to PMT XML within local copies of XML files from PMT repository.
  ## The filepath should be absolute.
  spec = "/home/telegraf/Intel-PMT/xml/pmt.xml"

  ## Enable metrics by their datatype.
  ## See the Enabling Metrics section in README for more details.
  ## If empty, all metrics are enabled.
  ## When used, the alternative option samples_enabled should NOT be used.
  # datatypes_enabled = []

  ## Enable metrics by their name.
  ## See the Enabling Metrics section in README for more details.
  ## If empty, all metrics are enabled.
  ## When used, the alternative option datatypes_enabled should NOT be used.
  # samples_enabled = []
```

### Enabling metrics

By default, the plugin collects all available metrics.

To limit the metrics collected by the plugin, two options are available
for selecting metrics:

- enable by datatype (groups of metrics),
- enable by name.

Only one of these options should be used at a time.

See the table below for available datatypes and related metrics:

| Datatype | Metric name | Description |
|
||||
|-------------------------|-------------------------|-----------------------------------------------------------------------------------------------------------------------------|
|
||||
| `txtal_strap` | `XTAL_FREQ` | Clock rate of the crystal oscillator on this silicon |
|
||||
| `tdram_energy` | `DRAM_ENERGY_LOW` | DRAM energy consumed by all DIMMS in all Channels (uJ) |
|
||||
| | `DRAM_ENERGY_HIGH` | DRAM energy consumed by all DIMMS in all Channels (uJ) |
|
||||
| `tbandwidth_32b` | `C2U_BW` | Core to Uncore Bandwidth (per core and per uncore) |
|
||||
| | `U2C_BW` | Uncore to Core Bandwidth (per core and per uncore) |
|
||||
| | `PC2_LOW` | Time spent in the Package C-State 2 (PC2) |
|
||||
| | `PC2_HIGH` | Time spent in the Package C-State 2 (PC2) |
|
||||
| | `PC6_LOW` | Time spent in the Package C-State 6 (PC6) |
|
||||
| | `PC6_HIGH` | Time spent in the Package C-State 6 (PC6) |
|
||||
| | `MEM_RD_BW` | Memory Read Bandwidth (per channel) |
|
||||
| | `MEM_WR_BW` | Memory Write Bandwidth (per channel) |
|
||||
| | `DDRT_READ_BW` | DDRT Read Bandwidth (per channel) |
|
||||
| | `DDRT_WR_BW` | DDRT Write Bandwidth (per channel) |
|
||||
| | `THRT_COUNT` | Number of clock ticks when throttling occurred on IMC channel (per channel) |
|
||||
| | `PMSUM` | Energy accumulated by IMC channel (per channel) |
|
||||
| | `CMD_CNT_CH0` | Command count for IMC channel subchannel 0 (per channel) |
|
||||
| | `CMD_CNT_CH1` | Command count for IMC channel subchannel 1 (per channel) |
|
||||
| `tU32.0` | `PEM_ANY` | Duration for which a core frequency excursion occurred due to a listed or unlisted reason |
|
||||
| | `PEM_THERMAL` | Duration for which a core frequency excursion occurred due to EMTTM |
|
||||
| | `PEM_EXT_PROCHOT` | Duration for which a core frequency excursion occurred due to an external PROCHOT assertion |
|
||||
| | `PEM_PBM` | Duration for which a core frequency excursion occurred due to PBM |
|
||||
| | `PEM_PL1` | Duration for which a core frequency excursion occurred due to PL1 |
|
||||
| | `PEM_RESERVED` | PEM Reserved Counter |
|
||||
| | `PEM_PL2` | Duration for which a core frequency excursion occurred due to PL2 |
|
||||
| | `PEM_PMAX` | Duration for which a core frequency excursion occurred due to PMAX |
|
||||
| `tbandwidth_28b` | `C0Residency` | Core C0 Residency (per core) |
|
||||
| | `C1Residency` | Core C1 Residency (per core) |
|
||||
| `tratio` | `FET` | Current Frequency Excursion Threshold. Ratio of the core frequency. |
|
||||
| `tbandwidth_24b` | `UFS_MAX_RING_TRAFFIC` | IO Bandwidth for DMI or PCIE port (per port) |
|
||||
| `ttemperature` | `TEMP` | Current temperature of a core (per core) |
|
||||
| `tU8.0` | `VERSION` | For SPR, it's 0. New feature versions will uprev this. |
|
||||
| `tebb_energy` | `FIVR_HBM_ENERGY` | FIVR HBM Energy in uJ (per HBM) |
|
||||
| `tBOOL` | `OOB_PEM_ENABLE` | 0x0 (Default)=Inband interface for PEM is enabled. 0x1=OOB interface for PEM is enabled. |
|
||||
| | `ENABLE_PEM` | 0 (Default): Disable PEM. 1: Enable PEM |
|
||||
| | `ANY` | Set if a core frequency excursion occurs due to a listed or unlisted reason |
|
||||
| | `THERMAL` | Set if a core frequency excursion occurs due to any thermal event in core/uncore |
|
||||
| | `EXT_PROCHOT` | Set if a core frequency excursion occurs due to external PROCHOT assertion |
|
||||
| | `PBM` | Set if a core frequency excursion occurs due to a power limit (socket RAPL and/or platform RAPL) |
|
||||
| | `PL1` | Set if a core frequency excursion occurs due to PL1 input from any interfaces |
|
||||
| | `PL2` | Set if a core frequency excursion occurs due to PL2 input from any interfaces |
|
||||
| | `PMAX` | Set if a core frequency excursion occurs due to PMAX |
|
||||
| `ttsc` | `ART` | TSC Delta HBM (per HBM) |
|
||||
| `tproduct_id` | `PRODUCT_ID` | Product ID |
|
||||
| `tstring` | `LOCAL_REVISION` | Local Revision ID for this product |
|
||||
| | `RECORD_TYPE` | Record Type |
|
||||
| `tcore_state` | `EN` | Core x is enabled (per core) |
|
||||
| `thist_counter` | `FREQ_HIST_R0` | Frequency histogram range 0 (core in C6) counter (per core) |
|
||||
| | `FREQ_HIST_R1` | Frequency histogram range 1 (16.67-800 MHz) counter (per core) |
|
||||
| | `FREQ_HIST_R2` | Frequency histogram range 2 (801-1200 MHz) counter (per core) |
|
||||
| | `FREQ_HIST_R3` | Frequency histogram range 3 (1201-1600 MHz) counter (per core) |
|
||||
| | `FREQ_HIST_R4` | Frequency histogram range 4 (1601-2000 MHz) counter (per core) |
|
||||
| | `FREQ_HIST_R5` | Frequency histogram range 5 (2001-2400 MHz) counter (per core) |
|
||||
| | `FREQ_HIST_R6` | Frequency histogram range 6 (2401-2800 MHz) counter (per core) |
|
||||
| | `FREQ_HIST_R7` | Frequency histogram range 7 (2801-3200 MHz) counter (per core) |
|
||||
| | `FREQ_HIST_R8` | Frequency histogram range 8 (3201-3600 MHz) counter (per core) |
|
||||
| | `FREQ_HIST_R9` | Frequency histogram range 9 (3601-4000 MHz) counter (per core) |
|
||||
| | `FREQ_HIST_R10` | Frequency histogram range 10 (4001-4400 MHz) counter (per core) |
|
||||
| | `FREQ_HIST_R11` | Frequency histogram range 11 (greater than 4400 MHz) counter (per core) |
|
||||
| | `VOLT_HIST_R0` | Voltage histogram range 0 (less than 602 mV) counter (per core) |
|
||||
| | `VOLT_HIST_R1` | Voltage histogram range 1 (602.5-657 mV) counter (per core) |
|
||||
| | `VOLT_HIST_R2` | Voltage histogram range 2 (657.5-712 mV) counter (per core) |
|
||||
| | `VOLT_HIST_R3` | Voltage histogram range 3 (712.5-767 mV) counter (per core) |
|
||||
| | `VOLT_HIST_R4` | Voltage histogram range 4 (767.5-822 mV) counter (per core) |
|
||||
| | `VOLT_HIST_R5` | Voltage histogram range 5 (822.5-877 mV) counter (per core) |
|
||||
| | `VOLT_HIST_R6` | Voltage histogram range 6 (877.5-932 mV) counter (per core) |
|
||||
| | `VOLT_HIST_R7` | Voltage histogram range 7 (932.5-987 mV) counter (per core) |
|
||||
| | `VOLT_HIST_R8` | Voltage histogram range 8 (987.5-1042 mV) counter (per core) |
|
||||
| | `VOLT_HIST_R9` | Voltage histogram range 9 (1042.5-1097 mV) counter (per core) |
|
||||
| | `VOLT_HIST_R10` | Voltage histogram range 10 (1097.5-1152 mV) counter (per core) |
|
||||
| | `VOLT_HIST_R11` | Voltage histogram range 11 (greater than 1152 mV) counter (per core) |
|
||||
| | `TEMP_HIST_R0` | Temperature histogram range 0 (less than 20°C) counter |
|
||||
| | `TEMP_HIST_R1` | Temperature histogram range 1 (20.5-27.5°C) counter |
|
||||
| | `TEMP_HIST_R2` | Temperature histogram range 2 (28-35°C) counter |
|
||||
| | `TEMP_HIST_R3` | Temperature histogram range 3 (35.5-42.5°C) counter |
|
||||
| | `TEMP_HIST_R4` | Temperature histogram range 4 (43-50°C) counter |
|
||||
| | `TEMP_HIST_R5` | Temperature histogram range 5 (50.5-57.5°C) counter |
|
||||
| | `TEMP_HIST_R6` | Temperature histogram range 6 (58-65°C) counter |
|
||||
| | `TEMP_HIST_R7` | Temperature histogram range 7 (65.5-72.5°C) counter |
|
||||
| | `TEMP_HIST_R8` | Temperature histogram range 8 (73-80°C) counter |
|
||||
| | `TEMP_HIST_R9` | Temperature histogram range 9 (80.5-87.5°C) counter |
|
||||
| | `TEMP_HIST_R10` | Temperature histogram range 10 (88-95°C) counter |
|
||||
| | `TEMP_HIST_R11` | Temperature histogram range 11 (greater than 95°C) counter |
|
||||
| `tpvp_throttle_counter` | `PVP_THROTTLE_64` | Counter indicating the number of times the core x was throttled in the last 64 cycles window |
|
||||
| | `PVP_THROTTLE_1024` | Counter indicating the number of times the core x was throttled in the last 1024 cycles window |
|
||||
| `tpvp_level_res` | `PVP_LEVEL_RES_128_L0` | Counter indicating the percentage of residency during the last 2 ms measurement for level 0 of this type of CPU instruction |
|
||||
| | `PVP_LEVEL_RES_128_L1` | Counter indicating the percentage of residency during the last 2 ms measurement for level 1 of this type of CPU instruction |
|
||||
| | `PVP_LEVEL_RES_128_L2` | Counter indicating the percentage of residency during the last 2 ms measurement for level 2 of this type of CPU instruction |
|
||||
| | `PVP_LEVEL_RES_128_L3` | Counter indicating the percentage of residency during the last 2 ms measurement for level 3 of this type of CPU instruction |
|
||||
| | `PVP_LEVEL_RES_256_L0` | Counter indicating the percentage of residency during the last 2 ms measurement for level 0 of AVX256 CPU instructions |
|
||||
| | `PVP_LEVEL_RES_256_L1` | Counter indicating the percentage of residency during the last 2 ms measurement for level 1 of AVX256 CPU instructions |
|
||||
| | `PVP_LEVEL_RES_256_L2` | Counter indicating the percentage of residency during the last 2 ms measurement for level 2 of AVX256 CPU instructions |
|
||||
| | `PVP_LEVEL_RES_256_L3` | Counter indicating the percentage of residency during the last 2 ms measurement for level 3 of AVX256 CPU instructions |
|
||||
| | `PVP_LEVEL_RES_512_L0` | Counter indicating the percentage of residency during the last 2 ms measurement for level 0 of AVX512 CPU instructions |
|
||||
| | `PVP_LEVEL_RES_512_L1` | Counter indicating the percentage of residency during the last 2 ms measurement for level 1 of AVX512 CPU instructions |
|
||||
| | `PVP_LEVEL_RES_512_L2` | Counter indicating the percentage of residency during the last 2 ms measurement for level 2 of AVX512 CPU instructions |
|
||||
| | `PVP_LEVEL_RES_512_L3` | Counter indicating the percentage of residency during the last 2 ms measurement for level 3 of AVX512 CPU instructions |
|
||||
| | `PVP_LEVEL_RES_TMUL_L0` | Counter indicating the percentage of residency during the last 2 ms measurement for level 0 of TMUL CPU instructions |
|
||||
| | `PVP_LEVEL_RES_TMUL_L1` | Counter indicating the percentage of residency during the last 2 ms measurement for level 1 of TMUL CPU instructions |
|
||||
| | `PVP_LEVEL_RES_TMUL_L2` | Counter indicating the percentage of residency during the last 2 ms measurement for level 2 of TMUL CPU instructions |
|
||||
| | `PVP_LEVEL_RES_TMUL_L3` | Counter indicating the percentage of residency during the last 2 ms measurement for level 3 of TMUL CPU instructions |
|
||||
| `ttsc_timer` | `TSC_TIMER` | OOBMSM TSC (Time Stamp Counter) value |
|
||||
| `tnum_en_cha` | `NUM_EN_CHA` | Number of enabled CHAs |
|
||||
| `trmid_usage_counter` | `RMID0_RDT_CMT` | CHA x RMID 0 LLC cache line usage counter (per CHA) |
|
||||
| | `RMID1_RDT_CMT` | CHA x RMID 1 LLC cache line usage counter (per CHA) |
|
||||
| | `RMID2_RDT_CMT` | CHA x RMID 2 LLC cache line usage counter (per CHA) |
|
||||
| | `RMID3_RDT_CMT` | CHA x RMID 3 LLC cache line usage counter (per CHA) |
|
||||
| | `RMID4_RDT_CMT` | CHA x RMID 4 LLC cache line usage counter (per CHA) |
|
||||
| | `RMID5_RDT_CMT` | CHA x RMID 5 LLC cache line usage counter (per CHA) |
|
||||
| | `RMID6_RDT_CMT` | CHA x RMID 6 LLC cache line usage counter (per CHA) |
|
||||
| | `RMID7_RDT_CMT` | CHA x RMID 7 LLC cache line usage counter (per CHA) |
|
||||
| | `RMID0_RDT_MBM_TOTAL` | CHA x RMID 0 total memory transactions counter (per CHA) |
|
||||
| | `RMID1_RDT_MBM_TOTAL` | CHA x RMID 1 total memory transactions counter (per CHA) |
|
||||
| | `RMID2_RDT_MBM_TOTAL` | CHA x RMID 2 total memory transactions counter (per CHA) |
|
||||
| | `RMID3_RDT_MBM_TOTAL` | CHA x RMID 3 total memory transactions counter (per CHA) |
|
||||
| | `RMID4_RDT_MBM_TOTAL` | CHA x RMID 4 total memory transactions counter (per CHA) |
|
||||
| | `RMID5_RDT_MBM_TOTAL` | CHA x RMID 5 total memory transactions counter (per CHA) |
|
||||
| | `RMID6_RDT_MBM_TOTAL` | CHA x RMID 6 total memory transactions counter (per CHA) |
|
||||
| | `RMID7_RDT_MBM_TOTAL` | CHA x RMID 7 total memory transactions counter (per CHA) |
|
||||
| | `RMID0_RDT_MBM_LOCAL` | CHA x RMID 0 local memory transactions counter (per CHA) |
|
||||
| | `RMID1_RDT_MBM_LOCAL` | CHA x RMID 1 local memory transactions counter (per CHA) |
|
||||
| | `RMID2_RDT_MBM_LOCAL` | CHA x RMID 2 local memory transactions counter (per CHA) |
|
||||
| | `RMID3_RDT_MBM_LOCAL` | CHA x RMID 3 local memory transactions counter (per CHA) |
|
||||
| | `RMID4_RDT_MBM_LOCAL` | CHA x RMID 4 local memory transactions counter (per CHA) |
|
||||
| | `RMID5_RDT_MBM_LOCAL` | CHA x RMID 5 local memory transactions counter (per CHA) |
|
||||
| | `RMID6_RDT_MBM_LOCAL` | CHA x RMID 6 local memory transactions counter (per CHA) |
|
||||
| | `RMID7_RDT_MBM_LOCAL` | CHA x RMID 7 local memory transactions counter (per CHA) |
|
||||
| `ttw_unit` | `TW` | Time window. Valid TW range is 0 to 17. The unit is calculated as `2.3 * 2^TW` ms (e.g. `2.3 * 2^17` ms = ~302 seconds). |
|
||||
| `tcore_stress_level` | `STRESS_LEVEL` | Accumulating counter indicating relative stress level for a core (per core) |

### Example: C-State residency and temperature with a datatype metric filter

This configuration collects only a subset of metrics
by using a datatype filter:

```toml
[[inputs.intel_pmt]]
  spec = "/home/telegraf/Intel-PMT/xml/pmt.xml"
  datatypes_enabled = ["tbandwidth_28b","ttemperature"]
```

### Example: C-State residency and temperature with a sample metric filter

This configuration collects only a subset of metrics
by using a sample filter:

```toml
[[inputs.intel_pmt]]
  spec = "/home/telegraf/Intel-PMT/xml/pmt.xml"
  samples_enabled = ["C0Residency","C1Residency", "Cx_TEMP"]
```

## Metrics

All metrics have the following tags:

- `guid` (unique ID of an Intel PMT space).
- `numa_node` (NUMA node the sample is collected from).
- `pci_bdf` (PCI Bus:Device.Function (BDF) the sample is collected from).
- `sample_name` (name of the gathered sample).
- `sample_group` (name of a group to which the sample belongs).
- `datatype_idref` (datatype to which the sample belongs).

Samples whose `sample_name` is prefixed in the XMLs with `Cx_`, where `x`
is the core number, also have the following tag:

- `core` (core to which the metric relates).

Samples whose `sample_name` is prefixed in the XMLs with `CHAx_`, where `x`
is the CHA number, also have the following tag:

- `cha` (Caching and Home Agent to which the metric relates).

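As an illustration of how these per-resource tags relate to sample names, here
is a small standalone sketch (not the plugin's code; the plugin uses its own
pattern in `filtering.go`) that splits such names into the resource class, its
number, and the metric name:

```go
package main

import (
	"fmt"
	"regexp"
)

// Illustrative pattern: captures the resource class (C or CHA), its number,
// and the metric name for per-core / per-CHA samples.
var namePattern = regexp.MustCompile(`^(C|CHA)(\d+)_([A-Z0-9_]+)$`)

func main() {
	for _, name := range []string{"C0_PVP_THROTTLE_64", "CHA3_RMID0_RDT_CMT", "XTAL_FREQ"} {
		m := namePattern.FindStringSubmatch(name)
		if m == nil {
			fmt.Printf("%s: no per-resource prefix, no core/cha tag\n", name)
			continue
		}
		fmt.Printf("%s: class=%s number=%s sample_name=%s\n", name, m[1], m[2], m[3])
	}
}
```
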
## Example Output

Example output with `tpvp_throttle_counter` as a datatype metric filter:

```text
intel_pmt,core=0,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C0_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1886465i 1693766334000000000
|
||||
intel_pmt,core=1,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C1_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=2,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C2_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=3,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C3_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=4,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C4_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1357578i 1693766334000000000
|
||||
intel_pmt,core=5,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C5_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=6,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C6_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2024801i 1693766334000000000
|
||||
intel_pmt,core=7,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C7_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=8,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C8_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1390741i 1693766334000000000
|
||||
intel_pmt,core=9,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C9_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=10,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C10_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1536483i 1693766334000000000
|
||||
intel_pmt,core=11,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C11_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=12,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C12_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=13,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C13_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=14,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C14_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1604964i 1693766334000000000
|
||||
intel_pmt,core=15,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C15_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1168673i 1693766334000000000
|
||||
intel_pmt,core=16,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C16_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=17,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C17_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=18,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C18_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1276588i 1693766334000000000
|
||||
intel_pmt,core=19,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C19_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1139005i 1693766334000000000
|
||||
intel_pmt,core=20,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C20_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=21,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C21_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=22,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C22_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=970698i 1693766334000000000
|
||||
intel_pmt,core=23,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C23_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=24,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C24_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=25,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C25_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1178462i 1693766334000000000
|
||||
intel_pmt,core=26,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C26_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=27,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C27_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2093384i 1693766334000000000
|
||||
intel_pmt,core=28,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C28_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=29,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C29_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=30,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C30_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=31,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C31_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=32,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C32_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2825174i 1693766334000000000
|
||||
intel_pmt,core=33,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C33_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2592279i 1693766334000000000
|
||||
intel_pmt,core=34,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C34_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=35,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C35_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=36,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C36_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1960662i 1693766334000000000
|
||||
intel_pmt,core=37,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C37_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1821914i 1693766334000000000
|
||||
intel_pmt,core=38,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C38_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=39,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C39_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=40,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C40_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=41,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C41_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2654651i 1693766334000000000
|
||||
intel_pmt,core=42,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C42_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2230984i 1693766334000000000
|
||||
intel_pmt,core=43,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C43_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=44,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C44_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=45,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C45_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=46,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C46_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2325520i 1693766334000000000
|
||||
intel_pmt,core=47,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C47_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=48,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C48_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=49,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C49_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=50,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C50_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=51,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C51_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=52,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C52_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1468880i 1693766334000000000
|
||||
intel_pmt,core=53,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C53_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2151919i 1693766334000000000
|
||||
intel_pmt,core=54,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C54_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=55,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C55_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2065994i 1693766334000000000
|
||||
intel_pmt,core=56,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C56_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=57,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C57_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=58,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C58_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1553691i 1693766334000000000
|
||||
intel_pmt,core=59,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C59_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1624177i 1693766334000000000
|
||||
intel_pmt,core=60,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C60_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=61,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C61_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=62,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C62_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=63,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C63_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
|
||||
intel_pmt,core=0,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C0_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=12977949i 1693766334000000000
|
||||
intel_pmt,core=1,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C1_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=2,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C2_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=3,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C3_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=4,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C4_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=7180524i 1693766334000000000
|
||||
intel_pmt,core=5,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C5_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=6,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C6_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=8667263i 1693766334000000000
|
||||
intel_pmt,core=7,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C7_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=8,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C8_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=5945851i 1693766334000000000
|
||||
intel_pmt,core=9,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C9_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=10,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C10_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6669829i 1693766334000000000
|
||||
intel_pmt,core=11,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C11_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=12,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C12_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=13,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C13_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=14,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C14_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6579832i 1693766334000000000
|
||||
intel_pmt,core=15,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C15_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6101856i 1693766334000000000
|
||||
intel_pmt,core=16,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C16_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=17,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C17_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=18,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C18_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=7796183i 1693766334000000000
|
||||
intel_pmt,core=19,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C19_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6849098i 1693766334000000000
|
||||
intel_pmt,core=20,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C20_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=21,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C21_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=22,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C22_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=12378942i 1693766334000000000
|
||||
intel_pmt,core=23,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C23_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=24,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C24_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=25,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C25_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=8299231i 1693766334000000000
|
||||
intel_pmt,core=26,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C26_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=27,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C27_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=7986390i 1693766334000000000
|
||||
intel_pmt,core=28,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C28_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=29,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C29_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=30,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C30_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=31,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C31_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=32,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C32_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=9876325i 1693766334000000000
|
||||
intel_pmt,core=33,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C33_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=8547471i 1693766334000000000
|
||||
intel_pmt,core=34,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C34_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=35,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C35_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=36,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C36_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=9231744i 1693766334000000000
|
||||
intel_pmt,core=37,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C37_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=8133031i 1693766334000000000
|
||||
intel_pmt,core=38,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C38_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=39,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C39_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=40,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C40_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=41,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C41_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6136417i 1693766334000000000
|
||||
intel_pmt,core=42,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C42_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6091019i 1693766334000000000
|
||||
intel_pmt,core=43,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C43_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=44,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C44_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=45,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C45_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=46,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C46_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=5804639i 1693766334000000000
|
||||
intel_pmt,core=47,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C47_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=48,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C48_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=49,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C49_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=50,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C50_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=51,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C51_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=52,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C52_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=5738491i 1693766334000000000
|
||||
intel_pmt,core=53,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C53_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6058504i 1693766334000000000
|
||||
intel_pmt,core=54,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C54_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=55,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C55_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=5987093i 1693766334000000000
|
||||
intel_pmt,core=56,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C56_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=57,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C57_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=58,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C58_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=10384909i 1693766334000000000
|
||||
intel_pmt,core=59,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C59_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=7305786i 1693766334000000000
|
||||
intel_pmt,core=60,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C60_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=61,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C61_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=62,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C62_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
intel_pmt,core=63,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C63_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
|
||||
```
plugins/inputs/intel_pmt/filtering.go (new file, 179 lines)
@@ -0,0 +1,179 @@
//go:build linux && amd64
|
||||
|
||||
package intel_pmt
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"regexp"
|
||||
"slices"
|
||||
)
|
||||
|
||||
var metricPatternRegex = regexp.MustCompile(`(?P<class>(C|CHA))\d+_(?P<var>[A-Z0-9_]+)$`)
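// For example, "C61_TEMP" matches with class "C" and var "TEMP", and
// "CHA5_RMID0_RDT_CMT" matches with class "CHA" and var "RMID0_RDT_CMT";
// FindStringSubmatch then returns 4 elements with the metric name at index 3.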
|
||||
|
||||
// verifyNoEmpty checks that at least one pair of PMT XMLs is not empty.
|
||||
//
|
||||
// Data for different GUIDs can be empty
|
||||
// but data for at least one GUID cannot be empty.
|
||||
//
|
||||
// Returns:
|
||||
// - nil if at least one pair of XMLs for GUID is not empty.
|
||||
// - an error if all XMLs are empty.
|
||||
func (p *IntelPMT) verifyNoEmpty() error {
|
||||
emptyAggInterface := true
|
||||
for guid := range p.pmtTelemetryFiles {
|
||||
if len(p.pmtAggregatorInterface[guid].AggregatorSamples.AggregatorSample) != 0 {
|
||||
emptyAggInterface = false
|
||||
break
|
||||
}
|
||||
}
|
||||
if emptyAggInterface {
|
||||
return errors.New("all aggregator interface XMLs are empty")
|
||||
}
|
||||
emptyAgg := true
|
||||
for guid := range p.pmtTelemetryFiles {
|
||||
if len(p.pmtAggregator[guid].SampleGroup) != 0 {
|
||||
emptyAgg = false
|
||||
break
|
||||
}
|
||||
}
|
||||
if emptyAgg {
|
||||
return errors.New("all aggregator XMLs are empty")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// filterAggregatorByDatatype filters Aggregator XML by provided datatypes.
|
||||
//
|
||||
// Every sample group in aggregator XML consists of several samples.
|
||||
// Every sample in the group has a datatype assigned.
|
||||
// This function filters the samples based on their datatype.
|
||||
//
|
||||
// Parameters:
|
||||
//
|
||||
// datatypes: string slice of datatypes to include in filtered XML.
|
||||
func (a *aggregator) filterAggregatorByDatatype(datatypes []string) {
|
||||
var newSampleGroup []sampleGroup
|
||||
for _, group := range a.SampleGroup {
|
||||
var tmpAgg []sample
|
||||
for _, aggSample := range group.Sample {
|
||||
if slices.Contains(datatypes, aggSample.DatatypeIDRef) {
|
||||
tmpAgg = append(tmpAgg, aggSample)
|
||||
}
|
||||
}
|
||||
if len(tmpAgg) > 0 {
|
||||
// groupSample can have samples with different datatypeIDRef inside
|
||||
// so new groupSample needs to be created
|
||||
// containing all needed information and filtered samples only.
|
||||
newGroup := sampleGroup{}
|
||||
newGroup.SampleID = group.SampleID
|
||||
newGroup.Sample = tmpAgg
|
||||
newSampleGroup = append(newSampleGroup, newGroup)
|
||||
}
|
||||
}
|
||||
a.SampleGroup = newSampleGroup
|
||||
}
|
||||
|
||||
// filterAggregatorBySampleName filters Aggregator XML by provided sample names.
|
||||
//
|
||||
// Every sample has a name specified in the XML.
|
||||
// This function filters the samples based on their names.
|
||||
// The match can be exact or can be based on regex match.
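// For example, a filter value of "TEMP" keeps the per-core sample "C61_TEMP"
// via the name pattern, while a filter value of "C61_TEMP" keeps only that exact sample.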
|
||||
//
|
||||
// Parameters:
|
||||
//
|
||||
// sampleNames: string slice of sample names to include in filtered XML.
|
||||
func (a *aggregator) filterAggregatorBySampleName(sampleNames []string) {
|
||||
var newSampleGroup []sampleGroup
|
||||
for _, group := range a.SampleGroup {
|
||||
var tmpAgg []sample
|
||||
for _, aggSample := range group.Sample {
|
||||
if shouldAddSample(aggSample, sampleNames) {
|
||||
tmpAgg = append(tmpAgg, aggSample)
|
||||
}
|
||||
}
|
||||
|
||||
if len(tmpAgg) > 0 {
|
||||
newGroup := sampleGroup{}
|
||||
newGroup.SampleID = group.SampleID
|
||||
newGroup.Sample = tmpAgg
|
||||
newSampleGroup = append(newSampleGroup, newGroup)
|
||||
}
|
||||
}
|
||||
a.SampleGroup = newSampleGroup
|
||||
}
|
||||
|
||||
// shouldAddSample is a helper function for filterAggregatorBySampleName
|
||||
// that checks if the sample should be added to the sample group.
|
||||
func shouldAddSample(s sample, sampleNames []string) bool {
|
||||
matches := metricPatternRegex.FindStringSubmatch(s.SampleName)
|
||||
for _, v := range sampleNames {
|
||||
if s.SampleName == v {
|
||||
return true
|
||||
}
|
||||
if len(matches) == 4 {
|
||||
if matches[3] == v {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// filterAggInterfaceByDatatype filters the aggregator interface XML by provided datatypes.
|
||||
//
|
||||
// Aggregator interface XML contains many aggregator samples inside, each with datatype assigned.
|
||||
// This function filters aggregator samples based on their datatype.
|
||||
//
|
||||
// Parameters:
|
||||
//
|
||||
// datatypes: string slice of datatypes to include in filtered XML.
|
||||
// dtMetricsFound: a map of found datatypes for all GUIDs.
|
||||
func (a *aggregatorInterface) filterAggInterfaceByDatatype(datatypes []string, dtMetricsFound map[string]bool) {
|
||||
newAggSample := aggregatorSamples{}
|
||||
for _, s := range a.AggregatorSamples.AggregatorSample {
|
||||
if slices.Contains(datatypes, s.DatatypeIDRef) {
|
||||
dtMetricsFound[s.DatatypeIDRef] = true
|
||||
newAggSample.AggregatorSample = append(newAggSample.AggregatorSample, s)
|
||||
}
|
||||
}
|
||||
a.AggregatorSamples = newAggSample
|
||||
}
|
||||
|
||||
// filterAggInterfaceBySampleName filters aggregator interface XML by sample names.
|
||||
//
|
||||
// This function filters aggregator samples based on the provided sampleNames.
|
||||
// When the name for the sample is unique the match is exact.
|
||||
// When the name is per-resource (e.g. prefixed with Cx_) the match is regex-based.
|
||||
//
|
||||
// Parameters:
|
||||
//
|
||||
// sampleNames: string slice of sample names to include in filtered XML.
|
||||
// sMetricsFound: a map of found metric names for all GUIDs.
|
||||
func (a *aggregatorInterface) filterAggInterfaceBySampleName(sampleNames []string, sMetricsFound map[string]bool) {
|
||||
newAggSample := aggregatorSamples{}
|
||||
for _, s := range a.AggregatorSamples.AggregatorSample {
|
||||
if shouldAddAggregatorSample(s, sampleNames, sMetricsFound) {
|
||||
newAggSample.AggregatorSample = append(newAggSample.AggregatorSample, s)
|
||||
}
|
||||
}
|
||||
a.AggregatorSamples = newAggSample
|
||||
}
|
||||
|
||||
// shouldAddAggregatorSample is a helper function for filterAggInterfaceBySampleName
|
||||
// that checks if the sample should be added to the aggregator samples.
|
||||
func shouldAddAggregatorSample(s aggregatorSample, sampleNames []string, sMetricsFound map[string]bool) bool {
|
||||
matches := metricPatternRegex.FindStringSubmatch(s.SampleName)
|
||||
for _, userMetricInput := range sampleNames {
|
||||
if s.SampleName == userMetricInput {
|
||||
sMetricsFound[userMetricInput] = true
|
||||
return true
|
||||
}
|
||||
if len(matches) == 4 {
|
||||
if matches[3] == userMetricInput {
|
||||
sMetricsFound[userMetricInput] = true
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
plugins/inputs/intel_pmt/filtering_test.go (new file, 655 lines)
@@ -0,0 +1,655 @@
//go:build linux && amd64
|
||||
|
||||
package intel_pmt
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/influxdata/telegraf/testutil"
|
||||
)
|
||||
|
||||
func TestFilterAggregatorByDatatype(t *testing.T) {
|
||||
t.Run("Filter aggregator, 1 sample group, 2 samples with different DataTypes", func(t *testing.T) {
|
||||
agg := aggregator{
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
{
|
||||
DatatypeIDRef: "missing",
|
||||
Msb: 0,
|
||||
Lsb: 0,
|
||||
SampleID: "missing",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
p := IntelPMT{
|
||||
DatatypeFilter: []string{"test-datatype"},
|
||||
}
|
||||
expected := aggregator{
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
agg.filterAggregatorByDatatype(p.DatatypeFilter)
|
||||
require.Equal(t, expected, agg)
|
||||
})
|
||||
|
||||
t.Run("Filter Aggregator, 2 sample groups, only 1 sample group has expected datatype", func(t *testing.T) {
|
||||
agg := aggregator{
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
SampleID: uint64(2),
|
||||
Sample: []sample{
|
||||
{
|
||||
DatatypeIDRef: "missing",
|
||||
Msb: 0,
|
||||
Lsb: 0,
|
||||
SampleID: "missing",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
p := IntelPMT{
|
||||
DatatypeFilter: []string{"test-datatype"},
|
||||
}
|
||||
expected := aggregator{
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
agg.filterAggregatorByDatatype(p.DatatypeFilter)
|
||||
require.Equal(t, expected, agg)
|
||||
})
|
||||
}
|
||||
|
||||
func TestFilterAggregatorInterfaceByDatatype(t *testing.T) {
|
||||
t.Run("Filter agg interface, 2 Agg samples, only 1 should remain", func(t *testing.T) {
|
||||
aggInterface := aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
SampleName: "missing",
|
||||
SampleGroup: "missing",
|
||||
DatatypeIDRef: "missing",
|
||||
TransformREF: "missing",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "missing",
|
||||
SampleIDREF: "missing",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
expected := aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
p := IntelPMT{
|
||||
DatatypeFilter: []string{"test-datatype"},
|
||||
Log: testutil.Logger{},
|
||||
}
|
||||
aggInterface.filterAggInterfaceByDatatype(p.DatatypeFilter, make(map[string]bool))
|
||||
require.Equal(t, expected, aggInterface)
|
||||
})
|
||||
}
|
||||
|
||||
func TestFilterAggregatorBySampleName(t *testing.T) {
|
||||
t.Run("Filter aggregator, 2 sample names, with the same datatype, 1 sample name matches exactly", func(t *testing.T) {
|
||||
agg := aggregator{
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
SampleName: "exists",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
{
|
||||
SampleName: "missing",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 0,
|
||||
Lsb: 0,
|
||||
SampleID: "missing",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
p := IntelPMT{
|
||||
SampleFilter: []string{"exists"},
|
||||
}
|
||||
expected := aggregator{
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
SampleName: "exists",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
agg.filterAggregatorBySampleName(p.SampleFilter)
|
||||
require.Equal(t, expected, agg)
|
||||
})
|
||||
|
||||
t.Run("Filter aggregator, 2 sample names, with the same datatype, 1 sample name matches by regex", func(t *testing.T) {
|
||||
agg := aggregator{
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
SampleName: "C61_TEMP",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
{
|
||||
SampleName: "C61_TEMP_test",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 0,
|
||||
Lsb: 0,
|
||||
SampleID: "missing",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
p := IntelPMT{
|
||||
SampleFilter: []string{"TEMP"},
|
||||
}
|
||||
expected := aggregator{
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
SampleName: "C61_TEMP",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
agg.filterAggregatorBySampleName(p.SampleFilter)
|
||||
require.Equal(t, expected, agg)
|
||||
})
|
||||
}
|
||||
|
||||
func TestFilterAggregatorInterfaceBySampleName(t *testing.T) {
|
||||
t.Run("Filter agg interface, 2 Agg samples, 1 sample name matches exactly", func(t *testing.T) {
|
||||
aggInterface := aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "C36_PVP_LEVEL_RES_128_L1",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
SampleName: "missing",
|
||||
SampleGroup: "missing",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "missing",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "missing",
|
||||
SampleIDREF: "missing",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
expected := aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "C36_PVP_LEVEL_RES_128_L1",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
p := IntelPMT{
|
||||
SampleFilter: []string{"PVP_LEVEL_RES_128_L1"},
|
||||
Log: testutil.Logger{},
|
||||
}
|
||||
aggInterface.filterAggInterfaceBySampleName(p.SampleFilter, make(map[string]bool))
|
||||
require.Equal(t, expected, aggInterface)
|
||||
})
|
||||
|
||||
t.Run("Filter agg interface, 2 Agg samples, 1 sample name matches by regex", func(t *testing.T) {
|
||||
aggInterface := aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
SampleName: "missing",
|
||||
SampleGroup: "missing",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "missing",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "missing",
|
||||
SampleIDREF: "missing",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
expected := aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
p := IntelPMT{
|
||||
SampleFilter: []string{"test-sample"},
|
||||
Log: testutil.Logger{},
|
||||
}
|
||||
aggInterface.filterAggInterfaceBySampleName(p.SampleFilter, make(map[string]bool))
|
||||
require.Equal(t, expected, aggInterface)
|
||||
})
|
||||
}
|
||||
|
||||
func TestVerifyNoEmpty(t *testing.T) {
|
||||
t.Run("Correct XMLs, no filtering by user", func(t *testing.T) {
|
||||
p := &IntelPMT{
|
||||
pmtAggregator: map[string]aggregator{
|
||||
"test-guid": {
|
||||
SampleGroup: []sampleGroup{{}},
|
||||
},
|
||||
},
|
||||
pmtAggregatorInterface: map[string]aggregatorInterface{
|
||||
"test-guid": {
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{{}},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
p.pmtTelemetryFiles = map[string]pmtFileInfo{
|
||||
"test-guid": []fileInfo{{}},
|
||||
}
|
||||
require.NoError(t, p.verifyNoEmpty())
|
||||
})
|
||||
|
||||
t.Run("Incorrect XMLs, filtering by datatype that doesn't exist", func(t *testing.T) {
|
||||
p := &IntelPMT{
|
||||
pmtAggregator: map[string]aggregator{
|
||||
"test-guid": {},
|
||||
},
|
||||
pmtAggregatorInterface: map[string]aggregatorInterface{
|
||||
"test-guid": {},
|
||||
},
|
||||
DatatypeFilter: []string{"doesn't-exist"},
|
||||
Log: testutil.Logger{},
|
||||
pmtTelemetryFiles: map[string]pmtFileInfo{"test-guid": []fileInfo{{}}},
|
||||
}
|
||||
aggInterface := aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
aggInterface.filterAggInterfaceByDatatype(p.DatatypeFilter, make(map[string]bool))
|
||||
p.pmtAggregatorInterface["test-guid"] = aggInterface
|
||||
require.ErrorContains(t, p.verifyNoEmpty(), "all aggregator interface XMLs are empty")
|
||||
})
|
||||
|
||||
t.Run("Incorrect XMLs, user provided sample names that don't exist", func(t *testing.T) {
|
||||
p := &IntelPMT{
|
||||
pmtAggregatorInterface: map[string]aggregatorInterface{
|
||||
"test-guid": {},
|
||||
},
|
||||
SampleFilter: []string{"doesn't-exist"},
|
||||
Log: testutil.Logger{},
|
||||
pmtTelemetryFiles: map[string]pmtFileInfo{"test-guid": []fileInfo{{}}},
|
||||
}
|
||||
|
||||
aggInterface := aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
aggInterface.filterAggInterfaceBySampleName(p.SampleFilter, make(map[string]bool))
|
||||
p.pmtAggregatorInterface["test-guid"] = aggInterface
|
||||
require.ErrorContains(t, p.verifyNoEmpty(), "XMLs are empty")
|
||||
})
|
||||
t.Run("Correct XMLs, user provided correct sample names", func(t *testing.T) {
|
||||
p := &IntelPMT{
|
||||
pmtAggregator: map[string]aggregator{
|
||||
"test-guid": {},
|
||||
},
|
||||
pmtAggregatorInterface: map[string]aggregatorInterface{
|
||||
"test-guid": {},
|
||||
},
|
||||
SampleFilter: []string{"test-sample"},
|
||||
pmtTelemetryFiles: map[string]pmtFileInfo{"test-guid": []fileInfo{{}}},
|
||||
}
|
||||
agg := aggregator{
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
aggInterface := aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
agg.filterAggregatorBySampleName(p.SampleFilter)
|
||||
aggInterface.filterAggInterfaceBySampleName(p.SampleFilter, make(map[string]bool))
|
||||
p.pmtAggregator["test-guid"] = agg
|
||||
p.pmtAggregatorInterface["test-guid"] = aggInterface
|
||||
require.NoError(t, p.verifyNoEmpty())
|
||||
})
|
||||
|
||||
t.Run("Correct XMLs, user provided correct datatype names", func(t *testing.T) {
|
||||
p := &IntelPMT{
|
||||
pmtAggregator: map[string]aggregator{
|
||||
"test-guid": {},
|
||||
},
|
||||
pmtAggregatorInterface: map[string]aggregatorInterface{
|
||||
"test-guid": {},
|
||||
},
|
||||
DatatypeFilter: []string{"test-datatype"},
|
||||
pmtTelemetryFiles: map[string]pmtFileInfo{"test-guid": []fileInfo{{}}},
|
||||
}
|
||||
agg := aggregator{
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
aggInterface := aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
agg.filterAggregatorByDatatype(p.DatatypeFilter)
|
||||
aggInterface.filterAggInterfaceByDatatype(p.DatatypeFilter, make(map[string]bool))
|
||||
p.pmtAggregator["test-guid"] = agg
|
||||
p.pmtAggregatorInterface["test-guid"] = aggInterface
|
||||
require.NoError(t, p.verifyNoEmpty())
|
||||
})
|
||||
|
||||
t.Run("Incorrect XMLs, no datatype metrics found in aggregator sample XML", func(t *testing.T) {
|
||||
p := &IntelPMT{
|
||||
pmtAggregator: map[string]aggregator{
|
||||
"test-guid": {},
|
||||
},
|
||||
pmtAggregatorInterface: map[string]aggregatorInterface{
|
||||
"test-guid": {},
|
||||
},
|
||||
DatatypeFilter: []string{"test-datatype"},
|
||||
pmtTelemetryFiles: map[string]pmtFileInfo{"test-guid": []fileInfo{{}}},
|
||||
}
|
||||
agg := aggregator{
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
// DatatypeIDREF is wrong
|
||||
DatatypeIDRef: "wrong",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
aggInterface := aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
agg.filterAggregatorByDatatype(p.DatatypeFilter)
|
||||
aggInterface.filterAggInterfaceByDatatype(p.DatatypeFilter, make(map[string]bool))
|
||||
p.pmtAggregator["test-guid"] = agg
|
||||
p.pmtAggregatorInterface["test-guid"] = aggInterface
|
||||
require.ErrorContains(t, p.verifyNoEmpty(), "all aggregator XMLs are empty")
|
||||
})
|
||||
}
|
405
plugins/inputs/intel_pmt/intel_pmt.go
Normal file
|
@@ -0,0 +1,405 @@
|
|||
//go:generate ../../../tools/readme_config_includer/generator
|
||||
//go:build linux && amd64
|
||||
|
||||
package intel_pmt
|
||||
|
||||
import (
|
||||
_ "embed"
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"fmt"
|
||||
"html"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
|
||||
"github.com/PaesslerAG/gval"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/plugins/inputs"
|
||||
)
|
||||
|
||||
//go:embed sample.conf
|
||||
var sampleConfig string
|
||||
|
||||
var hexToDecRegex = regexp.MustCompile(`0x[0-9a-fA-F]+`)
|
||||
|
||||
const (
|
||||
defaultPmtBasePath = "/sys/class/intel_pmt"
|
||||
pluginName = "intel_pmt"
|
||||
)
|
||||
|
||||
type IntelPMT struct {
|
||||
PmtSpec string `toml:"spec"`
|
||||
DatatypeFilter []string `toml:"datatypes_enabled"`
|
||||
SampleFilter []string `toml:"samples_enabled"`
|
||||
Log telegraf.Logger `toml:"-"`
|
||||
|
||||
pmtBasePath string
|
||||
reader sourceReader
|
||||
pmtTelemetryFiles map[string]pmtFileInfo
|
||||
pmtMetadata *pmt
|
||||
pmtAggregator map[string]aggregator
|
||||
pmtAggregatorInterface map[string]aggregatorInterface
|
||||
pmtTransformations map[string]map[string]transformation
|
||||
}
|
||||
|
||||
type pmtFileInfo []fileInfo
|
||||
|
||||
type fileInfo struct {
|
||||
path string
|
||||
numaNode string
|
||||
pciBdf string // PCI Bus:Device.Function (BDF)
|
||||
}
|
||||
|
||||
func (*IntelPMT) SampleConfig() string {
|
||||
return sampleConfig
|
||||
}
|
||||
|
||||
func (p *IntelPMT) Init() error {
|
||||
err := p.checkPmtSpec()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
err = p.explorePmtInSysfs()
|
||||
if err != nil {
|
||||
return fmt.Errorf("error while exploring pmt sysfs: %w", err)
|
||||
}
|
||||
|
||||
return p.parseXMLs()
|
||||
}
|
||||
|
||||
func (p *IntelPMT) Gather(acc telegraf.Accumulator) error {
|
||||
var wg sync.WaitGroup
|
||||
var hasError atomic.Bool
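// Telemetry files for each GUID are read and aggregated concurrently;
// individual errors are passed to the accumulator and only flagged here.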
|
||||
for guid := range p.pmtTelemetryFiles {
|
||||
wg.Add(1)
|
||||
go func(guid string, fileInfo []fileInfo) {
|
||||
defer wg.Done()
|
||||
for _, info := range fileInfo {
|
||||
data, err := os.ReadFile(info.path)
|
||||
if err != nil {
|
||||
hasError.Store(true)
|
||||
acc.AddError(fmt.Errorf("gathering metrics failed: %w", err))
|
||||
return
|
||||
}
|
||||
|
||||
err = p.aggregateSamples(acc, guid, data, info.numaNode, info.pciBdf)
|
||||
if err != nil {
|
||||
hasError.Store(true)
|
||||
acc.AddError(fmt.Errorf("gathering metrics failed: %w", err))
|
||||
return
|
||||
}
|
||||
}
|
||||
}(guid, p.pmtTelemetryFiles[guid])
|
||||
}
|
||||
wg.Wait()
|
||||
|
||||
if hasError.Load() {
|
||||
return errors.New("error(s) occurred while gathering metrics")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// checkPmtSpec checks if provided PmtSpec is correct and readable.
|
||||
//
|
||||
// PmtSpec is expected to be an absolute filepath.
|
||||
//
|
||||
// Returns:
|
||||
//
|
||||
// error - error if PmtSpec is invalid, not readable, or not absolute.
|
||||
func (p *IntelPMT) checkPmtSpec() error {
|
||||
if p.PmtSpec == "" {
|
||||
return errors.New("pmt spec is empty")
|
||||
}
|
||||
|
||||
if !isFileReadable(p.PmtSpec) {
|
||||
return fmt.Errorf("provided pmt spec is not readable %q", p.PmtSpec)
|
||||
}
|
||||
|
||||
lastSlash := strings.LastIndex(p.PmtSpec, "/")
|
||||
// if PmtSpec contains no "/"
|
||||
if lastSlash == -1 {
|
||||
return errors.New("provided pmt spec is not an absolute path")
|
||||
}
|
||||
p.pmtBasePath = p.PmtSpec[:lastSlash]
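// pmtBasePath (the directory containing the spec file) is later used as the
// base path when resolving each GUID's aggregator XML set.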
|
||||
p.reader = fileReader{}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// explorePmtInSysfs finds necessary paths in pmt sysfs.
|
||||
//
|
||||
// This method finds "telem" files, used to retrieve telemetry values
|
||||
// and saves them under their corresponding GUID.
|
||||
// It also finds which NUMA node and PCI BDF the samples belong to.
|
||||
//
|
||||
// Returns:
|
||||
//
|
||||
// error - error if any of the operations failed.
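//
// Example (illustrative): a directory such as /sys/class/intel_pmt/telem0
// is expected to hold a "guid" file, a "telem" file and a "device" symlink
// into the owning PCI device.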
|
||||
func (p *IntelPMT) explorePmtInSysfs() error {
|
||||
pmtDirectories, err := os.ReadDir(defaultPmtBasePath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error reading pmt directory: %w", err)
|
||||
}
|
||||
p.pmtTelemetryFiles = make(map[string]pmtFileInfo)
|
||||
for _, dir := range pmtDirectories {
|
||||
if !strings.HasPrefix(dir.Name(), "telem") {
|
||||
continue
|
||||
}
|
||||
telemDirPath := filepath.Join(defaultPmtBasePath, dir.Name())
|
||||
symlinkInfo, err := os.Stat(telemDirPath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error resolving symlink for directory %q: %w", telemDirPath, err)
|
||||
}
|
||||
if !symlinkInfo.IsDir() {
|
||||
continue
|
||||
}
|
||||
|
||||
pmtGUIDPath := filepath.Join(telemDirPath, "guid")
|
||||
rawGUID, err := os.ReadFile(pmtGUIDPath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("cannot read GUID: %w", err)
|
||||
}
|
||||
// cut the newline char
|
||||
tID := strings.TrimSpace(string(rawGUID))
|
||||
|
||||
telemPath := filepath.Join(telemDirPath, "telem")
|
||||
if !isFileReadable(telemPath) {
|
||||
p.Log.Warnf("telem file is not readable %q", telemPath)
|
||||
continue
|
||||
}
|
||||
|
||||
telemDevicePath := filepath.Join(telemDirPath, "device")
|
||||
telemDeviceSymlink, err := filepath.EvalSymlinks(telemDevicePath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error while evaluating symlink %q: %w", telemDeviceSymlink, err)
|
||||
}
|
||||
|
||||
telemDevicePciBdf := filepath.Base(filepath.Join(telemDeviceSymlink, ".."))
|
||||
|
||||
numaNodePath := filepath.Join(telemDeviceSymlink, "..", "numa_node")
|
||||
|
||||
numaNode, err := os.ReadFile(numaNodePath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error while reading numa_node file %q: %w", numaNodePath, err)
|
||||
}
|
||||
numaNodeString := strings.TrimSpace(string(numaNode))
|
||||
if numaNodeString == "" {
|
||||
return fmt.Errorf("numa_node file %q is empty", numaNodePath)
|
||||
}
|
||||
|
||||
fi := fileInfo{
|
||||
path: telemPath,
|
||||
numaNode: numaNodeString,
|
||||
pciBdf: telemDevicePciBdf,
|
||||
}
|
||||
p.pmtTelemetryFiles[tID] = append(p.pmtTelemetryFiles[tID], fi)
|
||||
}
|
||||
if len(p.pmtTelemetryFiles) == 0 {
|
||||
return errors.New("no telemetry sources found - current platform doesn't support PMT or proper permissions needed to read them")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func isFileReadable(path string) bool {
|
||||
if _, err := os.Stat(path); err != nil {
|
||||
return false
|
||||
}
|
||||
|
||||
file, err := os.Open(path)
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
file.Close()
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// getSampleValues reads all sample values for all sample groups.
|
||||
//
|
||||
// This method reads all telemetry samples for given GUID from given data
|
||||
// and saves it in results map.
|
||||
//
|
||||
// Parameters:
|
||||
//
|
||||
// guid - GUID saying which Aggregator XML will be read.
|
||||
// data - data read from "telem" file.
|
||||
//
|
||||
// Returns:
|
||||
//
|
||||
// map[string]uint64 - results map with read data.
|
||||
// error - error if getting any of the values failed.
|
||||
func (p *IntelPMT) getSampleValues(guid string, data []byte) (map[string]uint64, error) {
|
||||
results := make(map[string]uint64)
|
||||
for _, group := range p.pmtAggregator[guid].SampleGroup {
|
||||
// Determine starting position of the Sample Group.
|
||||
// Each Sample Group occupies 8 bytes.
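// For example, the group with sampleID 3 starts at byte offset 24.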
|
||||
offset := 8 * group.SampleID
|
||||
for _, sample := range group.Sample {
|
||||
var err error
|
||||
results[sample.SampleID], err = getTelemSample(sample, data, offset)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
}
|
||||
return results, nil
|
||||
}
|
||||
|
||||
// getTelemSample extracts a telemetry sample from a given buffer.
|
||||
//
|
||||
// This function uses offset as a starting position.
|
||||
// Then it uses LSB and MSB from sample to determine which bits
|
||||
// to read from the given buffer.
|
||||
//
|
||||
// Parameters:
|
||||
//
|
||||
// s - sample from Aggregator XML containing LSB and MSB info.
|
||||
// buf - the byte buffer containing the telemetry data.
|
||||
// offset - the starting position (in bytes) in the buffer.
|
||||
//
|
||||
// Returns:
|
||||
//
|
||||
// uint64 - the extracted sample as a 64-bit unsigned integer.
|
||||
// error - error if offset+8 exceeds the size of the buffer.
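//
// Example: with Lsb=4 and Msb=7 the mask is 0xf0, so a byte 0x30 at the
// offset yields (0x30 & 0xf0) >> 4 = 3.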
|
||||
func getTelemSample(s sample, buf []byte, offset uint64) (uint64, error) {
|
||||
if len(buf) < int(offset+8) {
|
||||
return 0, fmt.Errorf("error reading telemetry sample: insufficient bytes from offset %d in buffer of size %d", offset, len(buf))
|
||||
}
|
||||
data := binary.LittleEndian.Uint64(buf[offset : offset+8])
|
||||
|
||||
// Apply mask and shift right
|
||||
value := (data & s.mask) >> s.Lsb
|
||||
return value, nil
|
||||
}
|
||||
|
||||
// aggregateSamples outputs transformed metrics to Telegraf.
|
||||
//
|
||||
// This method transforms low level samples
|
||||
// into high-level samples with appropriate transformation equation.
|
||||
// Then it creates fields and tags and adds them to Telegraf Accumulator.
|
||||
//
|
||||
// Parameters:
|
||||
//
|
||||
// guid - GUID saying which Aggregator Interface will be read.
|
||||
// data - contents of the "telem" file.
|
||||
// numaNode - which NUMA node this sample belongs to.
|
||||
// pciBdf - PCI Bus:Device.Function (BDF) this sample belongs to.
|
||||
// acc - Telegraf Accumulator.
|
||||
//
|
||||
// Returns:
|
||||
//
|
||||
// error - error if getting values has failed, if sample IDref is missing or if equation evaluation has failed.
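//
// Example: with transformation "$testvar + 2" and a decoded low-level value
// of 1 bound to "testvar", the reported "value" field is 3.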
|
||||
func (p *IntelPMT) aggregateSamples(acc telegraf.Accumulator, guid string, data []byte, numaNode, pciBdf string) error {
|
||||
results, err := p.getSampleValues(guid, data)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, sample := range p.pmtAggregatorInterface[guid].AggregatorSamples.AggregatorSample {
|
||||
parameters := make(map[string]interface{})
|
||||
for _, input := range sample.TransformInputs.TransformInput {
|
||||
if _, ok := results[input.SampleIDREF]; !ok {
|
||||
return fmt.Errorf("sample with IDREF %q has not been found", input.SampleIDREF)
|
||||
}
|
||||
parameters[input.VarName] = results[input.SampleIDREF]
|
||||
}
|
||||
eq := transformEquation(p.pmtTransformations[guid][sample.TransformREF].Transform)
|
||||
res, err := eval(eq, parameters)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error during eval of sample %q: %w", sample.SampleName, err)
|
||||
}
|
||||
fields := map[string]interface{}{
|
||||
"value": res,
|
||||
}
|
||||
tags := map[string]string{
|
||||
"guid": guid,
|
||||
"numa_node": numaNode,
|
||||
"pci_bdf": pciBdf,
|
||||
"sample_name": sample.SampleName,
|
||||
"sample_group": sample.SampleGroup,
|
||||
"datatype_idref": sample.DatatypeIDRef,
|
||||
}
|
||||
if sample.core != "" {
|
||||
tags["core"] = sample.core
|
||||
}
|
||||
if sample.cha != "" {
|
||||
tags["cha"] = sample.cha
|
||||
}
|
||||
|
||||
acc.AddFields(pluginName, fields, tags)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// transformEquation changes the equation string to be ready for eval.
|
||||
//
|
||||
// This function removes "$" signs, which prefixes every parameter in equations.
|
||||
// Then it decodes XML escape sequences
|
||||
// like "&lt;" into "<", "&amp;" into "&" and "&gt;" into ">"
|
||||
// so they can be used in evaluation.
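// Example: "$a &amp; $b" becomes "a & b".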
|
||||
//
|
||||
// Parameters:
|
||||
//
|
||||
// eq - string which should be transformed.
|
||||
//
|
||||
// Returns:
|
||||
//
|
||||
// string - transformed string.
|
||||
func transformEquation(eq string) string {
|
||||
withoutDollar := strings.ReplaceAll(eq, "$", "")
|
||||
decoded := html.UnescapeString(withoutDollar)
|
||||
return decoded
|
||||
}
|
||||
|
||||
// eval calculates the value of given equation for given parameters.
|
||||
//
|
||||
// This function evaluates arbitrary equations with parameters.
|
||||
// It substitutes the parameters in the equation with their values
|
||||
// and calculates its value.
|
||||
// Example: equation "a + b", with params: a: 2, b: 3.
|
||||
// a and b will be substituted with their values so the equation becomes "2 + 3".
|
||||
// If any of the parameters are missing then the equation is invalid and returns an error.
|
||||
// Parameters:
|
||||
//
|
||||
// eq - equation which should be calculated.
|
||||
// params - parameters to substitute in the equation.
|
||||
//
|
||||
// Returns:
|
||||
//
|
||||
// interface - the value of calculation.
|
||||
// error - error if the equation is empty, if hex to dec conversion failed or if the equation is invalid.
|
||||
func eval(eq string, params map[string]interface{}) (interface{}, error) {
|
||||
if eq == "" {
|
||||
return nil, errors.New("no transformation equation found")
|
||||
}
|
||||
// gval doesn't support hexadecimals
|
||||
eq = hexToDecRegex.ReplaceAllStringFunc(eq, hexToDec)
|
||||
if eq == "" {
|
||||
return nil, errors.New("error during hex to decimal conversion")
|
||||
}
|
||||
result, err := gval.Evaluate(eq, params)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return result, nil
|
||||
}
|
||||
|
||||
func hexToDec(hexStr string) string {
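// Converts a hexadecimal literal such as "0xff" into its decimal string form
// ("255"); returns an empty string if parsing fails.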
|
||||
dec, err := strconv.ParseInt(hexStr, 0, 64)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
return strconv.FormatInt(dec, 10)
|
||||
}
|
||||
|
||||
func init() {
|
||||
inputs.Add(pluginName, func() telegraf.Input {
|
||||
return &IntelPMT{}
|
||||
})
|
||||
}
|
33
plugins/inputs/intel_pmt/intel_pmt_notamd64linux.go
Normal file
|
@@ -0,0 +1,33 @@
|
|||
//go:generate ../../../tools/readme_config_includer/generator
|
||||
//go:build !linux || !amd64
|
||||
|
||||
package intel_pmt
|
||||
|
||||
import (
|
||||
_ "embed"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/plugins/inputs"
|
||||
)
|
||||
|
||||
//go:embed sample.conf
|
||||
var sampleConfig string
|
||||
|
||||
type IntelPMT struct {
|
||||
Log telegraf.Logger `toml:"-"`
|
||||
}
|
||||
|
||||
func (*IntelPMT) SampleConfig() string { return sampleConfig }
|
||||
|
||||
func (p *IntelPMT) Init() error {
|
||||
p.Log.Warn("Current platform is not supported")
|
||||
return nil
|
||||
}
|
||||
|
||||
func (*IntelPMT) Gather(_ telegraf.Accumulator) error { return nil }
|
||||
|
||||
func init() {
|
||||
inputs.Add("intel_pmt", func() telegraf.Input {
|
||||
return &IntelPMT{}
|
||||
})
|
||||
}
|
592
plugins/inputs/intel_pmt/intel_pmt_test.go
Normal file
|
@@ -0,0 +1,592 @@
|
|||
//go:build linux && amd64
|
||||
|
||||
package intel_pmt
|
||||
|
||||
import (
|
||||
_ "embed"
|
||||
"os"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/testutil"
|
||||
)
|
||||
|
||||
func createTempFile(t *testing.T, dir, pattern string, data []byte) (*os.File, os.FileInfo) {
|
||||
tempFile, err := os.CreateTemp(dir, pattern)
|
||||
if err != nil {
|
||||
t.Fatalf("error creating a temporary file %v: %v", tempFile.Name(), err)
|
||||
}
|
||||
_, err = tempFile.Write(data)
|
||||
if err != nil {
|
||||
t.Fatalf("error writing buffer to file %v: %v", tempFile.Name(), err)
|
||||
}
|
||||
fileInfo, err := tempFile.Stat()
|
||||
if err != nil {
|
||||
t.Fatalf("failed to stat a temporary file %v: %v", tempFile.Name(), err)
|
||||
}
|
||||
|
||||
return tempFile, fileInfo
|
||||
}
|
||||
|
||||
func TestTransformEquation(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
input string
|
||||
expected string
|
||||
}{
|
||||
{
|
||||
name: "No changes",
|
||||
input: "abc",
|
||||
expected: "abc",
|
||||
},
|
||||
{
|
||||
name: "Remove $ sign",
|
||||
input: "a$b$c",
|
||||
expected: "abc",
|
||||
},
|
||||
{
|
||||
name: "Decode HTML entities",
|
||||
input: "a&b",
|
||||
expected: "a&b",
|
||||
},
|
||||
{
|
||||
name: "Remove $ and decode HTML entities",
|
||||
input: "$a&b$c",
|
||||
expected: "a&bc",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
output := transformEquation(tt.input)
|
||||
require.Equal(t, tt.expected, output)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestEval(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
eq string
|
||||
params map[string]interface{}
|
||||
expected interface{}
|
||||
err bool
|
||||
}{
|
||||
{
|
||||
name: "empty equation",
|
||||
eq: "",
|
||||
params: nil,
|
||||
expected: nil,
|
||||
err: true,
|
||||
},
|
||||
{
|
||||
name: "Valid equation",
|
||||
eq: "2 + 2",
|
||||
params: nil,
|
||||
expected: float64(4),
|
||||
err: false,
|
||||
},
|
||||
{
|
||||
name: "Valid equation with params, valid params",
|
||||
eq: "a + b",
|
||||
params: map[string]interface{}{
|
||||
"a": 2,
|
||||
"b": 3,
|
||||
},
|
||||
expected: float64(5),
|
||||
err: false,
|
||||
},
|
||||
{
|
||||
name: "Valid equation with params, invalid params",
|
||||
eq: "a + b",
|
||||
params: map[string]interface{}{
|
||||
"a": 2,
|
||||
// "b" is missing
|
||||
},
|
||||
expected: nil,
|
||||
err: true,
|
||||
},
|
||||
{
|
||||
name: "Invalid equation",
|
||||
eq: "2 +",
|
||||
params: nil,
|
||||
expected: nil,
|
||||
err: true,
|
||||
},
|
||||
{
|
||||
name: "Real equation from PMT - temperature of unused core",
|
||||
eq: "( ( parameter_0 >> 8 ) & 0xff ) + ( ( parameter_0 & 0xff ) / ( 2 ** 8 ) ) - 64",
|
||||
params: map[string]interface{}{
|
||||
"parameter_0": 0,
|
||||
},
|
||||
expected: float64(-64),
|
||||
err: false,
|
||||
},
|
||||
{
|
||||
name: "Real equation from PMT - temperature of working core",
|
||||
eq: "( ( parameter_0 >> 8 ) & 0xff ) + ( ( parameter_0 & 0xff ) / ( 2 ** 8 ) ) - 64",
|
||||
params: map[string]interface{}{
|
||||
"parameter_0": 23600,
|
||||
},
|
||||
expected: float64(28.1875),
|
||||
err: false,
|
||||
},
|
||||
{
|
||||
name: "Badly parsed real equation from PMT - temperature of working core",
|
||||
eq: "( ( parameter_0 >> 8 ) & 0xff ) + ( ( parameter_0 & 0xff ) / ( 2 ** 8 ) ) - 64",
|
||||
params: map[string]interface{}{
|
||||
"parameter_0": 23600,
|
||||
},
|
||||
expected: nil,
|
||||
err: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
result, err := eval(tt.eq, tt.params)
|
||||
if tt.err {
|
||||
require.Error(t, err)
|
||||
} else {
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tt.expected, result)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestGetTelemSample(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
s sample
|
||||
buf []byte
|
||||
offset uint64
|
||||
expected uint64
|
||||
err bool
|
||||
}{
|
||||
{
|
||||
name: "All bits set",
|
||||
s: sample{Msb: 7, Lsb: 0, mask: 255},
|
||||
buf: []byte{0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
|
||||
offset: 0,
|
||||
expected: 255,
|
||||
},
|
||||
{
|
||||
name: "Middle bits set",
|
||||
s: sample{Msb: 5, Lsb: 2, mask: 60},
|
||||
buf: []byte{0x3c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}, // 0x3c = 00111100 in binary
|
||||
offset: 0,
|
||||
expected: 15,
|
||||
},
|
||||
{
|
||||
name: "Non-zero offset",
|
||||
s: sample{Msb: 7, Lsb: 0, mask: 255},
|
||||
buf: []byte{0x00, 0x00, 0x00, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
|
||||
offset: 3,
|
||||
expected: 255,
|
||||
},
|
||||
{
|
||||
name: "Single bit set",
|
||||
s: sample{Msb: 4, Lsb: 4, mask: 16},
|
||||
buf: []byte{0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}, // 0x10 = 00010000 in binary
|
||||
offset: 0,
|
||||
expected: 1,
|
||||
},
|
||||
{
|
||||
name: "Two bytes set",
|
||||
s: sample{Msb: 14, Lsb: 0, mask: 32767},
|
||||
buf: []byte{0x30, 0x5c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}, // 0x5c30 = 23600 in decimal
|
||||
offset: 0,
|
||||
expected: 23600,
|
||||
},
|
||||
{
|
||||
name: "Offset larger than buffer size",
|
||||
s: sample{Msb: 7, Lsb: 0, mask: 255},
|
||||
buf: []byte{0x00},
|
||||
offset: 5,
|
||||
expected: 0,
|
||||
err: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
result, err := getTelemSample(tt.s, tt.buf, tt.offset)
|
||||
if tt.err {
|
||||
require.Error(t, err)
|
||||
} else {
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tt.expected, result)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestInit(t *testing.T) {
|
||||
t.Run("No PmtSpec", func(t *testing.T) {
|
||||
p := &IntelPMT{
|
||||
PmtSpec: "",
|
||||
}
|
||||
err := p.Init()
|
||||
require.ErrorContains(t, err, "pmt spec is empty")
|
||||
})
|
||||
|
||||
t.Run("Incorrect filepath PmtSpec", func(t *testing.T) {
|
||||
p := &IntelPMT{
|
||||
PmtSpec: "/this/path/doesntexist",
|
||||
}
|
||||
err := p.Init()
|
||||
require.ErrorContains(t, err, "provided pmt spec is not readable")
|
||||
})
|
||||
|
||||
t.Run("Incorrect PmtSpec, random letters", func(t *testing.T) {
|
||||
p := &IntelPMT{
|
||||
PmtSpec: "loremipsum",
|
||||
}
|
||||
err := p.Init()
|
||||
require.ErrorContains(t, err, "provided pmt spec is not readable")
|
||||
})
|
||||
|
||||
t.Run("Correct filepath PmtSpec, no pmt/can't read pmt in sysfs", func(t *testing.T) {
|
||||
tmp := t.TempDir()
|
||||
testFile, _ := createTempFile(t, tmp, "test-file", []byte("<pmt><mappings><mapping></mapping></mappings></pmt>"))
|
||||
defer testFile.Close()
|
||||
|
||||
p := &IntelPMT{
|
||||
PmtSpec: testFile.Name(),
|
||||
Log: testutil.Logger{},
|
||||
}
|
||||
err := p.Init()
|
||||
require.ErrorContains(t, err, "error while exploring pmt sysfs")
|
||||
})
|
||||
}
|
||||
|
||||
func TestGather(t *testing.T) {
|
||||
type fields struct {
|
||||
PmtSpec string
|
||||
Log telegraf.Logger
|
||||
pmtTelemetryFiles map[string]pmtFileInfo
|
||||
pmtAggregator map[string]aggregator
|
||||
pmtAggregatorInterface map[string]aggregatorInterface
|
||||
pmtTransformations map[string]map[string]transformation
|
||||
}
|
||||
type testFile struct {
|
||||
guid string
|
||||
content []byte
|
||||
numaNode string
|
||||
pciBdf string
|
||||
}
|
||||
tests := []struct {
|
||||
name string
|
||||
fields fields
|
||||
files []testFile
|
||||
expected []telegraf.Metric
|
||||
wantErr bool
|
||||
}{
|
||||
{
|
||||
name: "Incorrect gather, results map has no value for sample",
|
||||
fields: fields{
|
||||
pmtAggregator: map[string]aggregator{
|
||||
"test-guid": {
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
mask: 16,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
pmtAggregatorInterface: map[string]aggregatorInterface{
|
||||
"test-guid": {
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
// missing sampleIDREF
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
files: []testFile{
|
||||
{guid: "test-guid", content: []byte{0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}, numaNode: "0"},
|
||||
},
|
||||
wantErr: true,
|
||||
},
|
||||
{
|
||||
name: "Failed Gather, no equation for gathered sample",
|
||||
fields: fields{
|
||||
pmtAggregatorInterface: map[string]aggregatorInterface{
|
||||
"test-guid": {
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{SampleName: "test-sample"},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
files: []testFile{
|
||||
{guid: "test-guid"},
|
||||
},
|
||||
wantErr: true,
|
||||
},
|
||||
{
|
||||
name: "Correct gather, 2 guids, 2 metrics returned",
|
||||
fields: fields{
|
||||
pmtAggregator: map[string]aggregator{
|
||||
"test-guid": {
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
mask: 16,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"test-guid2": {
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
DatatypeIDRef: "test-datatype2",
|
||||
Msb: 14,
|
||||
Lsb: 0,
|
||||
mask: 32767,
|
||||
SampleID: "test-sample-ref2",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
pmtAggregatorInterface: map[string]aggregatorInterface{
|
||||
"test-guid": {
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"test-guid2": {
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample2",
|
||||
SampleGroup: "test-group2",
|
||||
DatatypeIDRef: "test-datatype2",
|
||||
TransformREF: "test-transform-ref2",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testv",
|
||||
SampleIDREF: "test-sample-ref2",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
pmtTransformations: map[string]map[string]transformation{
|
||||
"test-guid": {
|
||||
"test-transform-ref": {
|
||||
Transform: "testvar + 2",
|
||||
},
|
||||
},
|
||||
"test-guid2": {
|
||||
"test-transform-ref2": {
|
||||
Transform: "( ( $testv >> 8 ) & 0xff ) + ( ( $testv & 0xff ) / ( 2 ** 8 ) ) - 64",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: []telegraf.Metric{
|
||||
testutil.MustMetric(
|
||||
"intel_pmt",
|
||||
map[string]string{
|
||||
"guid": "test-guid",
|
||||
"numa_node": "0",
|
||||
"pci_bdf": "0000:00:0a.0",
|
||||
"sample_name": "test-sample",
|
||||
"sample_group": "test-group",
|
||||
"datatype_idref": "test-datatype",
|
||||
},
|
||||
map[string]interface{}{
|
||||
// 1 from buffer, 2 from equation
|
||||
"value": float64(3),
|
||||
},
|
||||
time.Time{},
|
||||
),
|
||||
testutil.MustMetric(
|
||||
"intel_pmt",
|
||||
map[string]string{
|
||||
"guid": "test-guid2",
|
||||
"numa_node": "1",
|
||||
"pci_bdf": "0001:00:0a.0",
|
||||
"sample_name": "test-sample2",
|
||||
"sample_group": "test-group2",
|
||||
"datatype_idref": "test-datatype2",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"value": float64(28.1875),
|
||||
},
|
||||
time.Time{},
|
||||
),
|
||||
},
|
||||
files: []testFile{
|
||||
{guid: "test-guid", content: []byte{0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}, numaNode: "0", pciBdf: "0000:00:0a.0"},
|
||||
{guid: "test-guid2", content: []byte{0x30, 0x5c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}, numaNode: "1", pciBdf: "0001:00:0a.0"},
|
||||
},
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "Correct gather, 1 value returned",
|
||||
fields: fields{
|
||||
pmtAggregator: map[string]aggregator{
|
||||
"test-guid": {
|
||||
SampleGroup: []sampleGroup{
|
||||
{
|
||||
SampleID: uint64(0),
|
||||
Sample: []sample{
|
||||
{
|
||||
DatatypeIDRef: "test-datatype",
|
||||
Msb: 4,
|
||||
Lsb: 4,
|
||||
mask: 16,
|
||||
SampleID: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
pmtAggregatorInterface: map[string]aggregatorInterface{
|
||||
"test-guid": {
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test-sample",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
pmtTransformations: map[string]map[string]transformation{
|
||||
"test-guid": {
|
||||
"test-transform-ref": {
|
||||
Transform: "testvar + 2",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: []telegraf.Metric{
|
||||
testutil.MustMetric(
|
||||
"intel_pmt",
|
||||
map[string]string{
|
||||
"guid": "test-guid",
|
||||
"numa_node": "0",
|
||||
"pci_bdf": "0000:00:0a.0",
|
||||
"sample_name": "test-sample",
|
||||
"sample_group": "test-group",
|
||||
"datatype_idref": "test-datatype",
|
||||
},
|
||||
map[string]interface{}{
|
||||
// 1 from buffer, 2 from equation
|
||||
"value": float64(3),
|
||||
},
|
||||
time.Time{},
|
||||
),
|
||||
},
|
||||
files: []testFile{
|
||||
{guid: "test-guid", content: []byte{0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}, numaNode: "0", pciBdf: "0000:00:0a.0"},
|
||||
},
|
||||
wantErr: false,
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
p := &IntelPMT{
|
||||
PmtSpec: tt.fields.PmtSpec,
|
||||
Log: testutil.Logger{},
|
||||
pmtAggregator: tt.fields.pmtAggregator,
|
||||
pmtTelemetryFiles: tt.fields.pmtTelemetryFiles,
|
||||
pmtAggregatorInterface: tt.fields.pmtAggregatorInterface,
|
||||
pmtTransformations: tt.fields.pmtTransformations,
|
||||
}
|
||||
var acc testutil.Accumulator
|
||||
telemetryFiles := make(map[string]pmtFileInfo)
|
||||
tmp := t.TempDir()
|
||||
for _, file := range tt.files {
|
||||
testFile, _ := createTempFile(t, tmp, "test-file", file.content)
|
||||
telemetryFiles[file.guid] = append(telemetryFiles[file.guid], fileInfo{
|
||||
path: testFile.Name(),
|
||||
numaNode: file.numaNode,
|
||||
pciBdf: file.pciBdf,
|
||||
})
|
||||
}
|
||||
p.pmtTelemetryFiles = telemetryFiles
|
||||
if tt.wantErr {
|
||||
require.Error(t, acc.GatherError(p.Gather))
|
||||
} else {
|
||||
require.NoError(t, acc.GatherError(p.Gather))
|
||||
testutil.RequireMetricsEqual(t, tt.expected, acc.GetTelegrafMetrics(), testutil.IgnoreTime(), testutil.SortMetrics())
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
18
plugins/inputs/intel_pmt/sample.conf
Normal file
|
@@ -0,0 +1,18 @@
|
|||
# Intel Platform Monitoring Technology plugin exposes Intel PMT metrics available through the Intel PMT kernel space.
|
||||
# This plugin ONLY supports Linux.
|
||||
[[inputs.intel_pmt]]
|
||||
## Filepath to PMT XML within local copies of XML files from PMT repository.
|
||||
## The filepath should be absolute.
|
||||
spec = "/home/telegraf/Intel-PMT/xml/pmt.xml"
|
||||
|
||||
## Enable metrics by their datatype.
|
||||
## See the Enabling Metrics section in README for more details.
|
||||
## If empty, all metrics are enabled.
|
||||
## When used, the alternative option samples_enabled should NOT be used.
|
||||
# datatypes_enabled = []
|
||||
|
||||
## Enable metrics by their name.
|
||||
## See the Enabling Metrics section in README for more details.
|
||||
## If empty, all metrics are enabled.
|
||||
## When used, the alternative option datatypes_enabled should NOT be used.
|
||||
# samples_enabled = []
|
34
plugins/inputs/intel_pmt/tags_extraction.go
Normal file
|
@@ -0,0 +1,34 @@
|
|||
//go:build linux && amd64
|
||||
|
||||
package intel_pmt
|
||||
|
||||
import "regexp"
|
||||
|
||||
var (
|
||||
// core in sample name - like C5_
|
||||
coreRegex = regexp.MustCompile("^C([0-9]+)_")
|
||||
// CHA in sample name - like CHA43_
|
||||
chaRegex = regexp.MustCompile("^CHA([0-9]+)_")
|
||||
)
|
||||
|
||||
func (a *aggregatorInterface) extractTagsFromSample() {
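// A leading core or CHA prefix of the sample name is moved into a dedicated
// field, e.g. "C34_TEMP" becomes "TEMP" with core "34" and "CHA43_CLOCK"
// becomes "CLOCK" with cha "43".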
|
||||
newAggSample := aggregatorSamples{}
|
||||
for _, sample := range a.AggregatorSamples.AggregatorSample {
|
||||
matches := coreRegex.FindStringSubmatch(sample.SampleName)
|
||||
if len(matches) == 2 {
|
||||
// matches[0] is the entire matched text
|
||||
// matches[1] is the captured number (in parentheses)
|
||||
sample.core = matches[1]
|
||||
sample.SampleName = coreRegex.ReplaceAllString(sample.SampleName, "")
|
||||
newAggSample.AggregatorSample = append(newAggSample.AggregatorSample, sample)
|
||||
continue
|
||||
}
|
||||
matches = chaRegex.FindStringSubmatch(sample.SampleName)
|
||||
if len(matches) == 2 {
|
||||
sample.cha = matches[1]
|
||||
sample.SampleName = chaRegex.ReplaceAllString(sample.SampleName, "")
|
||||
}
|
||||
newAggSample.AggregatorSample = append(newAggSample.AggregatorSample, sample)
|
||||
}
|
||||
a.AggregatorSamples = newAggSample
|
||||
}
|
155
plugins/inputs/intel_pmt/tags_extraction_test.go
Normal file
|
@@ -0,0 +1,155 @@
|
|||
//go:build linux && amd64
|
||||
|
||||
package intel_pmt
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestExtractTagsFromSample(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
input aggregatorInterface
|
||||
expected aggregatorInterface
|
||||
}{
|
||||
{
|
||||
name: "Extract core number from sampleName",
|
||||
input: aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "C34_test",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
core: "34",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "Extract cha number from sample",
|
||||
input: aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "CHA34_test",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
cha: "34",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "SampleName doesn't contain any matched patterns, no change in sample",
|
||||
input: aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: aggregatorInterface{
|
||||
AggregatorSamples: aggregatorSamples{
|
||||
AggregatorSample: []aggregatorSample{
|
||||
{
|
||||
SampleName: "test",
|
||||
SampleGroup: "test-group",
|
||||
DatatypeIDRef: "test-datatype",
|
||||
TransformREF: "test-transform-ref",
|
||||
TransformInputs: transformInputs{
|
||||
TransformInput: []transformInput{
|
||||
{
|
||||
VarName: "testvar",
|
||||
SampleIDREF: "test-sample-ref",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
tt.input.extractTagsFromSample()
|
||||
require.Equal(t, tt.expected, tt.input)
|
||||
})
|
||||
}
|
||||
}
|
274
plugins/inputs/intel_pmt/xml_parser.go
Normal file
|
@@ -0,0 +1,274 @@
|
|||
//go:build linux && amd64
|
||||
|
||||
package intel_pmt
|
||||
|
||||
import (
|
||||
"encoding/xml"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
)
|
||||
|
||||
type pmt struct {
|
||||
XMLName xml.Name `xml:"pmt"`
|
||||
Mappings mappings `xml:"mappings"`
|
||||
}
|
||||
type mappings struct {
|
||||
XMLName xml.Name `xml:"mappings"`
|
||||
Mapping []mapping `xml:"mapping"`
|
||||
}
|
||||
|
||||
type mapping struct {
|
||||
XMLName xml.Name `xml:"mapping"`
|
||||
GUID string `xml:"guid,attr"`
|
||||
XMLSet xmlset `xml:"xmlset"`
|
||||
}
|
||||
|
||||
type xmlset struct {
|
||||
XMLName xml.Name `xml:"xmlset"`
|
||||
Basedir string `xml:"basedir"`
|
||||
Aggregator string `xml:"aggregator"`
|
||||
AggregatorInterface string `xml:"aggregatorinterface"`
|
||||
}
|
||||
|
||||
type aggregator struct {
|
||||
XMLName xml.Name `xml:"Aggregator"`
|
||||
Name string `xml:"name"`
|
||||
SampleGroup []sampleGroup `xml:"SampleGroup"`
|
||||
}
|
||||
|
||||
type sampleGroup struct {
|
||||
XMLName xml.Name `xml:"SampleGroup"`
|
||||
SampleID uint64 `xml:"sampleID,attr"`
|
||||
Sample []sample `xml:"sample"`
|
||||
}
|
||||
|
||||
type sample struct {
|
||||
XMLName xml.Name `xml:"sample"`
|
||||
SampleName string `xml:"name,attr"`
|
||||
DatatypeIDRef string `xml:"datatypeIDREF,attr"`
|
||||
SampleID string `xml:"sampleID,attr"`
|
||||
Lsb uint64 `xml:"lsb"`
|
||||
Msb uint64 `xml:"msb"`
|
||||
|
||||
mask uint64
|
||||
}
|
||||
|
||||
type aggregatorInterface struct {
|
||||
XMLName xml.Name `xml:"AggregatorInterface"`
|
||||
Transformations transformations `xml:"TransFormations"`
|
||||
AggregatorSamples aggregatorSamples `xml:"AggregatorSamples"`
|
||||
}
|
||||
|
||||
type transformations struct {
|
||||
XMLName xml.Name `xml:"TransFormations"`
|
||||
Transformation []transformation `xml:"TransFormation"`
|
||||
}
|
||||
|
||||
type transformation struct {
|
||||
XMLName xml.Name `xml:"TransFormation"`
|
||||
Name string `xml:"name,attr"`
|
||||
TransformID string `xml:"transformID,attr"`
|
||||
Transform string `xml:"transform"`
|
||||
}
|
||||
|
||||
type aggregatorSamples struct {
|
||||
XMLName xml.Name `xml:"AggregatorSamples"`
|
||||
AggregatorSample []aggregatorSample `xml:"T_AggregatorSample"`
|
||||
}
|
||||
|
||||
type aggregatorSample struct {
|
||||
XMLName xml.Name `xml:"T_AggregatorSample"`
|
||||
SampleName string `xml:"sampleName,attr"`
|
||||
SampleGroup string `xml:"sampleGroup,attr"`
|
||||
DatatypeIDRef string `xml:"datatypeIDREF,attr"`
|
||||
TransformInputs transformInputs `xml:"TransFormInputs"`
|
||||
TransformREF string `xml:"transformREF"`
|
||||
|
||||
core string
|
||||
cha string
|
||||
}
|
||||
|
||||
type transformInputs struct {
|
||||
XMLName xml.Name `xml:"TransFormInputs"`
|
||||
TransformInput []transformInput `xml:"TransFormInput"`
|
||||
}
|
||||
|
||||
type transformInput struct {
|
||||
XMLName xml.Name `xml:"TransFormInput"`
|
||||
VarName string `xml:"varName,attr"`
|
||||
SampleIDREF string `xml:"sampleIDREF"`
|
||||
}
|
||||
|
||||
type sourceReader interface {
|
||||
getReadCloser(source string) (io.ReadCloser, error)
|
||||
}
|
||||
|
||||
type fileReader struct{}
|
||||
|
||||
func (fileReader) getReadCloser(source string) (io.ReadCloser, error) {
|
||||
return os.Open(source)
|
||||
}
|
||||
|
||||
// parseXMLs reads and parses PMT XMLs.
|
||||
//
|
||||
// This method retrieves all metadata about known GUIDs from PmtSpec.
|
||||
// Then, it explores PMT sysfs to find all readable "telem" files and their GUIDs.
|
||||
// It then matches found (readable) system GUIDs with GUIDs from metadata and
|
||||
// reads corresponding sets of XMLs.
|
||||
//
|
||||
// Returns:
|
||||
//
|
||||
// error - if PMT spec is empty, if exploring PMT sysfs fails, or if reading XMLs fails.
|
||||
func (p *IntelPMT) parseXMLs() error {
|
||||
err := parseXML(p.PmtSpec, p.reader, &p.pmtMetadata)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if len(p.pmtMetadata.Mappings.Mapping) == 0 {
|
||||
return errors.New("pmt XML provided contains no mappings")
|
||||
}
|
||||
|
||||
err = p.readXMLs()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
p.pmtTransformations = make(map[string]map[string]transformation)
|
||||
for guid := range p.pmtTelemetryFiles {
|
||||
p.pmtTransformations[guid] = make(map[string]transformation)
|
||||
for _, transform := range p.pmtAggregatorInterface[guid].Transformations.Transformation {
|
||||
p.pmtTransformations[guid][transform.TransformID] = transform
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// readXMLs reads all XMLs for the found GUIDs.
//
// This method reads the two required XMLs for each found GUID,
// checks if any of the provided filtering metrics were not found,
// and checks if there is at least one non-empty XML set.
//
// Returns:
//
//	error - if the reading operation failed or if all XMLs are empty.
func (p *IntelPMT) readXMLs() error {
	p.pmtAggregator = make(map[string]aggregator)
	p.pmtAggregatorInterface = make(map[string]aggregatorInterface)
	dtMetricsFound := make(map[string]bool)
	sampleFilterFound := make(map[string]bool)
	for guid := range p.pmtTelemetryFiles {
		err := p.getAllXMLData(guid, dtMetricsFound, sampleFilterFound)
		if err != nil {
			return fmt.Errorf("failed reading XMLs: %w", err)
		}
	}
	for _, dt := range p.DatatypeFilter {
		if _, ok := dtMetricsFound[dt]; !ok {
			p.Log.Warnf("Configured datatype metric %q has not been found", dt)
		}
	}
	for _, sm := range p.SampleFilter {
		if _, ok := sampleFilterFound[sm]; !ok {
			p.Log.Warnf("Configured sample metric %q has not been found", sm)
		}
	}

	return p.verifyNoEmpty()
}

// getAllXMLData retrieves the two XMLs for a given GUID.
//
// This method looks up where the aggregator and aggregator interface XMLs are located
// in the PMT metadata and reads the found XMLs.
// It also filters the read XMLs before saving them
// and extracts additional tags from the data.
//
// Parameters:
//
//	guid - GUID identifying which XMLs should be read.
//	dtMetricsFound - a map of found datatype metrics for all GUIDs.
//	smFound - a map of found sample names for all GUIDs.
//
// Returns:
//
//	error - if reading the XMLs has failed.
func (p *IntelPMT) getAllXMLData(guid string, dtMetricsFound, smFound map[string]bool) error {
	for _, mapping := range p.pmtMetadata.Mappings.Mapping {
		if mapping.GUID == guid {
			basedir := mapping.XMLSet.Basedir
			guid := mapping.GUID
			var aggSource, aggInterfaceSource string

			aggSource = filepath.Join(p.pmtBasePath, basedir, mapping.XMLSet.Aggregator)
			aggInterfaceSource = filepath.Join(p.pmtBasePath, basedir, mapping.XMLSet.AggregatorInterface)

			tAgg := aggregator{}
			tAggInterface := aggregatorInterface{}

			err := parseXML(aggSource, p.reader, &tAgg)
			if err != nil {
				return fmt.Errorf("failed reading aggregator XML: %w", err)
			}
			err = parseXML(aggInterfaceSource, p.reader, &tAggInterface)
			if err != nil {
				return fmt.Errorf("failed reading aggregator interface XML: %w", err)
			}
			if len(p.DatatypeFilter) > 0 {
				tAgg.filterAggregatorByDatatype(p.DatatypeFilter)
				tAggInterface.filterAggInterfaceByDatatype(p.DatatypeFilter, dtMetricsFound)
			}
			if len(p.SampleFilter) > 0 {
				tAgg.filterAggregatorBySampleName(p.SampleFilter)
				tAggInterface.filterAggInterfaceBySampleName(p.SampleFilter, smFound)
			}
			tAgg.calculateMasks()
			p.pmtAggregator[guid] = tAgg
			tAggInterface.extractTagsFromSample()
			p.pmtAggregatorInterface[guid] = tAggInterface
		}
	}
	return nil
}

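// calculateMasks precomputes, for every low-level sample, the bitmask covering the
// sample's Lsb..Msb bit range so the relevant bits can later be extracted from raw counters.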
func (a *aggregator) calculateMasks() {
	for i := range a.SampleGroup {
		for j, sample := range a.SampleGroup[i].Sample {
			mask := computeMask(sample.Msb, sample.Lsb)
			a.SampleGroup[i].Sample[j].mask = mask
		}
	}
}

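// computeMask returns a 64-bit mask with bits lsb through msb (inclusive) set.
// For example, computeMask(7, 4) masks in bits 4 through 7, i.e. it returns 0xf0.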
func computeMask(msb, lsb uint64) uint64 {
	msbMask := uint64(0xffffffffffffffff) & ((1 << (msb + 1)) - 1)
	lsbMask := uint64(0xffffffffffffffff) & (1<<lsb - 1)
	return msbMask & (^lsbMask)
}

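// parseXML opens the given source through the provided sourceReader and decodes its XML content into v.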
func parseXML(source string, sr sourceReader, v interface{}) error {
	if sr == nil {
		return errors.New("xml reader has not been initialized")
	}
	reader, err := sr.getReadCloser(source)
	if err != nil {
		return fmt.Errorf("error reading source %q: %w", source, err)
	}
	defer reader.Close()

	parser := xml.NewDecoder(reader)
	parser.AutoClose = xml.HTMLAutoClose
	parser.Entity = xml.HTMLEntity
	// The XMLs contain bare "&" characters in entity references,
	// which the parser treats as disallowed characters.
	// Strict mode is disabled to handle that.
	parser.Strict = false
	err = parser.Decode(v)
	if err != nil {
		return fmt.Errorf("error decoding an XML %q: %w", source, err)
	}
	return nil
}
155
plugins/inputs/intel_pmt/xml_parser_test.go
Normal file
@@ -0,0 +1,155 @@
//go:build linux && amd64

package intel_pmt

import (
	"bytes"
	"errors"
	"io"
	"os"
	"testing"

	"github.com/stretchr/testify/require"
)

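// mockReader implements sourceReader for tests, returning canned bytes (or a canned error)
// instead of reading from the filesystem.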
type mockReader struct {
	data []byte
	err  error
}

func (mr mockReader) getReadCloser(_ string) (io.ReadCloser, error) {
	if mr.err != nil {
		return nil, mr.err
	}
	return io.NopCloser(bytes.NewReader(mr.data)), nil
}

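// TestParseXMLs verifies that parseXMLs reports a decode error when the spec file
// exists but is empty.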
func TestParseXMLs(t *testing.T) {
	t.Run("Correct filepath PmtSpec, empty spec", func(t *testing.T) {
		testFile, err := os.CreateTemp(t.TempDir(), "test-file")
		if err != nil {
			t.Fatalf("error creating a temporary file: %v %v", testFile.Name(), err)
		}
		defer testFile.Close()

		p := &IntelPMT{
			PmtSpec: testFile.Name(),
			reader:  fileReader{},
		}
		err = p.parseXMLs()
		require.ErrorContains(t, err, "error decoding an XML")
	})
}

func TestParseXML(t *testing.T) {
	type Person struct {
		Name string `xml:"name"`
		Age  int    `xml:"age"`
	}

	tests := []struct {
		name   string
		source string
		sr     sourceReader
		v      interface{}
		err    bool
	}{
		{
			name:   "Valid XML",
			source: "test",
			sr:     mockReader{data: []byte(`<Person><name>John</name><age>30</age></Person>`), err: nil},
			v:      &Person{},
			err:    false,
		},
		{
			name:   "Empty XML",
			source: "test",
			sr:     mockReader{data: []byte(``), err: nil},
			v:      &Person{},
			err:    true,
		},
		{
			name:   "Nil interface parameter",
			source: "test",
			sr:     mockReader{data: []byte(`<Person><name>John</name><age>30</age></Person>`), err: nil},
			v:      nil,
			err:    true,
		},
		{
			name:   "Error from SourceReader",
			source: "test",
			sr:     mockReader{data: nil, err: errors.New("mock error")},
			v:      &Person{},
			err:    true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := parseXML(tt.source, tt.sr, tt.v)
			if tt.err {
				require.Error(t, err)
			} else {
				require.NoError(t, err)
			}
		})
	}
}

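// TestReadXMLs covers two failure paths: telemetry files whose XML set cannot be read,
// and an XML set whose aggregator interface contains no samples.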
func TestReadXMLs(t *testing.T) {
	t.Run("Test single PMT GUID, no XMLs found", func(t *testing.T) {
		p := &IntelPMT{
			pmtMetadata: &pmt{
				Mappings: mappings{
					Mapping: []mapping{
						{GUID: "abc"},
					},
				},
			},
			pmtTelemetryFiles: map[string]pmtFileInfo{
				"abc": []fileInfo{{path: "doesn't-exist"}},
			},
			reader: fileReader{},
		}
		err := p.readXMLs()
		require.Error(t, err)
		require.ErrorContains(t, err, "failed reading XMLs")
	})

	t.Run("Test single PMT GUID, aggregator interface empty", func(t *testing.T) {
		tmp := t.TempDir()

		bufAgg := []byte("<TELEM:Aggregator><TELEM:SampleGroup></TELEM:SampleGroup></TELEM:Aggregator>")
		testAgg, aggName := createTempFile(t, tmp, "test-agg", bufAgg)
		defer testAgg.Close()

		bufAggInterface := []byte("<TELI:AggregatorInterface></TELI:AggregatorInterface>")
		testAggInterface, aggInterfaceName := createTempFile(t, tmp, "test-aggInterface", bufAggInterface)
		defer testAggInterface.Close()

		p := &IntelPMT{
			pmtBasePath: tmp,
			pmtMetadata: &pmt{
				Mappings: mappings{
					Mapping: []mapping{
						{
							GUID: "abc",
							XMLSet: xmlset{
								Aggregator:          aggName.Name(),
								AggregatorInterface: aggInterfaceName.Name(),
							},
						},
					},
				},
			},
			// This is done just so we enter the loop
			pmtTelemetryFiles: map[string]pmtFileInfo{
				"abc": []fileInfo{{path: testAgg.Name()}},
			},
			reader: fileReader{},
		}

		err := p.readXMLs()
		require.ErrorContains(t, err, "all aggregator interface XMLs are empty")
	})
}