Adding upstream version 1.12.
Signed-off-by: Daniel Baumann <daniel@debian.org>
parent 3b95ae912c, commit ac60c09ef6
457 changed files with 159628 additions and 0 deletions

Documentation/nvme-wdc-vs-smart-add-log.txt (new file, 268 lines)
@@ -0,0 +1,268 @@
nvme-wdc-vs-smart-add-log(1)
============================

NAME
----
nvme-wdc-vs-smart-add-log - Send NVMe WDC vs-smart-add-log Vendor Unique Command, return result

SYNOPSIS
--------
[verse]
'nvme wdc vs-smart-add-log' <device> [--interval=<NUM>, -i <NUM>] [--output-format=<normal|json>, -o <normal|json>]

DESCRIPTION
-----------
For the NVMe device given, send a Vendor Unique WDC vs-smart-add-log command and
return the additional SMART log. The --interval option selects the reporting
interval whose performance statistics are returned.

The <device> parameter is mandatory; it is the NVMe character
device (ex: /dev/nvme0).

This will only work on WDC devices supporting this feature.
Results for any other device are undefined.

On success it returns 0, error code otherwise.

OPTIONS
-------
-i <NUM>::
--interval=<NUM>::
	Return the statistics from the specified reporting interval; defaults to 14.

-o <format>::
--output-format=<format>::
	Set the reporting format to 'normal' or 'json'. Only one output
	format can be used at a time. Default is 'normal'.
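
For example, to emit the log in machine-readable form (device path as in the
examples below):

------------
# nvme wdc vs-smart-add-log /dev/nvme0 --output-format=json
------------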

Valid interval values and descriptions:

[cols="2*", frame="topbot", align="center", options="header"]
|===
|Value |Description

|*1*
|Most recent five (5) minute accumulated set.

|*2-12*
|Previous five (5) minute accumulated sets.

|*13*
|The accumulated total of sets 1 through 12, covering the previous hour of
accumulated statistics.

|*14*
|The statistical set accumulated since power-up.

|*15*
|The statistical set accumulated during the entire lifetime of the device.
|===
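
For instance, to read the life-of-drive set rather than the default
since-power-up set, request interval 15:

------------
# nvme wdc vs-smart-add-log /dev/nvme0 --interval=15
------------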

CA Log Page Data Output Explanation
-----------------------------------
[cols="2*", frame="topbot", align="center", options="header"]
|===
|Field |Description

|*Physical NAND Bytes Written*
|The number of bytes written to NAND. 16 bytes - hi/lo

|*Physical NAND Bytes Read*
|The number of bytes read from NAND. 16 bytes - hi/lo

|*Bad NAND Block Count*
|Raw and normalized count of the number of NAND blocks that have been
retired after the drive's manufacturing tests (i.e. grown bad blocks).
2 bytes normalized, 6 bytes raw count

|*Uncorrectable Read Error Count*
|Total count of NAND reads that were not correctable by read retries, all
levels of ECC, or XOR (as applicable). 8 bytes

|*Soft ECC Error Count*
|Total count of NAND reads that were not correctable by read retries or
first-level ECC. 8 bytes

|*SSD End to End Detection Count*
|A count of errors detected by the SSD end-to-end error correction, which
includes DRAM, SRAM, or other storage element ECC/CRC protection mechanisms (not
NAND ECC). 4 bytes

|*SSD End to End Correction Count*
|A count of errors corrected by the SSD end-to-end error correction, which
includes DRAM, SRAM, or other storage element ECC/CRC protection mechanisms (not
NAND ECC). 4 bytes

|*System Data % Used*
|A normalized cumulative count of the number of erase cycles per block since
leaving the factory for the system (FW and metadata) area. Starts at 0 and
increments. 100 indicates that the estimated endurance has been consumed.

|*User Data Max Erase Count*
|The maximum erase count across all NAND blocks in the drive. 4 bytes

|*User Data Min Erase Count*
|The minimum erase count across all NAND blocks in the drive. 4 bytes

|*Refresh Count*
|A count of the number of blocks that have been re-allocated due to
background operations only. 8 bytes

|*Program Fail Count*
|Raw and normalized count of total program failures. The normalized count
starts at 100 and shows the percentage of remaining allowable failures.
2 bytes normalized, 6 bytes raw count

|*User Data Erase Fail Count*
|Raw and normalized count of total erase failures in the user area.
The normalized count starts at 100 and shows the percentage of remaining
allowable failures. 2 bytes normalized, 6 bytes raw count

|*System Area Erase Fail Count*
|Raw and normalized count of total erase failures in the system area.
The normalized count starts at 100 and shows the percentage of remaining
allowable failures. 2 bytes normalized, 6 bytes raw count

|*Thermal Throttling Status*
|The current status of thermal throttling (enabled or disabled).
2 bytes

|*Thermal Throttling Count*
|A count of the number of thermal throttling events. 2 bytes

|*PCIe Correctable Error Count*
|Summation counter of all PCIe correctable errors (Bad TLP, Bad
DLLP, Receiver error, Replay timeouts, Replay rollovers). 8 bytes
|===
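
These fields are easiest to consume programmatically from the json output; a
minimal sketch (assuming the separate jq utility is installed; the exact key
names depend on the nvme-cli version) pretty-prints the returned structure:

------------
# nvme wdc vs-smart-add-log /dev/nvme0 -o json | jq '.'
------------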

C1 Log Page Data Output Explanation
-----------------------------------
[cols="2*", frame="topbot", align="center", options="header"]
|===
|Field |Description

|*Host Read Commands*
|Number of host read commands received during the reporting period.

|*Host Read Blocks*
|Number of 512-byte blocks requested during the reporting period.

|*Average Read Size*
|Average read size, calculated as (Host Read Blocks/Host Read Commands).

|*Host Read Cache Hit Commands*
|Number of host read commands that were serviced entirely from the on-board read
cache during the reporting period. No access to the NAND flash memory was required.
This count is only updated if the entire command was serviced from the cache memory.

|*Host Read Cache Hit Percentage*
|Percentage of host read commands satisfied from the cache.

|*Host Read Cache Hit Blocks*
|Number of 512-byte blocks of data that have been returned for Host Read Cache Hit
Commands during the reporting period. This count is only updated with the blocks
returned for host read commands that were serviced entirely from cache memory.

|*Average Read Cache Hit Size*
|Average size of read commands satisfied from the cache.

|*Host Read Commands Stalled*
|Number of host read commands that were stalled due to a lack of resources within
the SSD during the reporting period (NAND flash command queue full, low cache page count,
cache page contention, etc.). Commands are not considered stalled if the only reason for
the delay was waiting for the data to be physically read from the NAND flash. It is normal
to expect this count to equal zero on heavily utilized systems.

|*Host Read Commands Stalled Percentage*
|Percentage of read commands that were stalled. If the figure is consistently high,
then consideration should be given to spreading the data across multiple SSDs.

|*Host Write Commands*
|Number of host write commands received during the reporting period.

|*Host Write Blocks*
|Number of 512-byte blocks written during the reporting period.

|*Average Write Size*
|Average write size, calculated as (Host Write Blocks/Host Write Commands).

|*Host Write Odd Start Commands*
|Number of host write commands that started on a non-aligned boundary during
the reporting period. The size of the boundary alignment is normally 4K; therefore
this returns the number of commands that started on a non-4K aligned boundary.
The SSD requires slightly more time to process non-aligned write commands than it
does to process aligned write commands.

|*Host Write Odd Start Commands Percentage*
|Percentage of host write commands that started on a non-aligned boundary. If this
figure is equal to or near 100%, and the NAND Read Before Write value is also high,
then the user should investigate the possibility of offsetting the file system. For
Microsoft Windows systems, the user can use Diskpart. For Unix-based operating systems,
there is normally a method whereby file system partitions can be placed where required.

|*Host Write Odd End Commands*
|Number of host write commands that ended on a non-aligned boundary during the
reporting period. The size of the boundary alignment is normally 4K; therefore this
returns the number of commands that ended on a non-4K aligned boundary.

|*Host Write Odd End Commands Percentage*
|Percentage of host write commands that ended on a non-aligned boundary.

|*Host Write Commands Stalled*
|Number of host write commands that were stalled due to a lack of resources within the
SSD during the reporting period. The most likely cause is that the write data was being
received faster than it could be saved to the NAND flash memory. If there was a large
volume of read commands being processed simultaneously, then other causes might include
the NAND flash command queue being full, a low cache page count, or cache page contention.
It is normal to expect this count to be non-zero on heavily utilized systems.

|*Host Write Commands Stalled Percentage*
|Percentage of write commands that were stalled. If the figure is consistently high, then
consideration should be given to spreading the data across multiple SSDs.

|*NAND Read Commands*
|Number of read commands issued to the NAND devices during the reporting period.
This figure will normally be much higher than the host read commands figure, as the data
needed to satisfy a single host read command may be spread across several NAND flash devices.

|*NAND Read Blocks*
|Number of 512-byte blocks requested from NAND flash devices during the reporting period.
This figure would normally be about the same as the host read blocks figure.

|*Average NAND Read Size*
|Average size of NAND read commands.

|*NAND Write Commands*
|Number of write commands issued to the NAND devices during the reporting period.
There is no real correlation between the number of host write commands issued and the
number of NAND Write Commands.

|*NAND Write Blocks*
|Number of 512-byte blocks written to the NAND flash devices during the reporting period.
This figure would normally be about the same as the host write blocks figure.

|*Average NAND Write Size*
|Average size of NAND write commands. This figure should never be greater than 128K, as
this is the maximum size write that is ever issued to a NAND device.

|*NAND Read Before Write*
|The number of read-before-write operations that were required to process
non-aligned host write commands during the reporting period. See Host Write Odd Start
Commands and Host Write Odd End Commands. NAND Read Before Write operations have
a detrimental effect on the overall performance of the device.
|===
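
Since each interval is an accumulated set, short-term behavior such as a jump
in the stalled command counters described above is easiest to observe from the
most recent five (5) minute set:

------------
# nvme wdc vs-smart-add-log /dev/nvme0 -i 1
------------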

EXAMPLES
--------
* Has the program issue a WDC vs-smart-add-log Vendor Unique Command with the default interval (14):
+
------------
# nvme wdc vs-smart-add-log /dev/nvme0
------------
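
* Has the program issue a WDC vs-smart-add-log Vendor Unique Command for the
previous-hour set (interval 13) with json output (interval and format chosen
for illustration):
+
------------
# nvme wdc vs-smart-add-log /dev/nvme0 -i 13 -o json
------------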

NVME
----
Part of the nvme-user suite.