PMC.WESTMEREUC(3)	 BSD Library Functions Manual	     PMC.WESTMEREUC(3)

NAME
     pmc.westmereuc — uncore measurement events for Intel Westmere family
     CPUs

LIBRARY
     Performance Counters Library (libpmc, -lpmc)

SYNOPSIS
     #include <pmc.h>

DESCRIPTION
     Intel Westmere CPUs contain PMCs conforming to version 2 of the Intel
     performance measurement architecture.  These CPUs contain two classes of
     PMCs:

     PMC_CLASS_UCF     Fixed-function counters that count only one hardware
		       event per counter.

     PMC_CLASS_UCP     Programmable counters that may be configured to count
		       one of a defined set of hardware events.

     The number of PMCs available in each class and their widths need to be
     determined at run time by calling pmc_cpuinfo(3).
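
     For example, the following program is a minimal, illustrative sketch
     (using the pmc_init(3) and pmc_cpuinfo(3) interfaces, with error
     handling kept brief) that reports the number and width of the uncore
     counters found at run time:

           #include <sys/types.h>
           #include <err.h>
           #include <pmc.h>
           #include <stdio.h>

           int
           main(void)
           {
                   const struct pmc_cpuinfo *ci;
                   uint32_t i;

                   if (pmc_init() < 0)
                           err(1, "pmc_init");
                   if (pmc_cpuinfo(&ci) < 0)
                           err(1, "pmc_cpuinfo");

                   /* Report the uncore fixed and programmable classes. */
                   for (i = 0; i < ci->pm_nclass; i++) {
                           const struct pmc_classinfo *c = &ci->pm_classes[i];

                           if (c->pm_class == PMC_CLASS_UCF ||
                               c->pm_class == PMC_CLASS_UCP)
                                   printf("%s: %u counters, %u bits wide\n",
                                       pmc_name_of_class(c->pm_class),
                                       c->pm_num, c->pm_width);
                   }
                   return (0);
           }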

     Intel Westmere PMCs are documented in "Volume 3B: System Programming
     Guide, Part 2", Intel(R) 64 and IA-32 Architectures Software Developer's
     Manual, Order Number: 253669-033US, Intel Corporation, December 2009.

   WESTMERE UNCORE FIXED FUNCTION PMCS
     These PMCs and their supported events are documented in pmc.ucf(3).  Not
     all CPUs in this family implement fixed-function counters.

   WESTMERE UNCORE PROGRAMMABLE PMCS
     The programmable PMCs support the following capabilities:

     Capability		  Support
     PMC_CAP_CASCADE	  No
     PMC_CAP_EDGE	  Yes
     PMC_CAP_INTERRUPT	  No
     PMC_CAP_INVERT	  Yes
     PMC_CAP_READ	  Yes
     PMC_CAP_PRECISE	  No
     PMC_CAP_SYSTEM	  No
     PMC_CAP_TAGGING	  No
     PMC_CAP_THRESHOLD	  Yes
     PMC_CAP_USER	  No
     PMC_CAP_WRITE	  Yes

   Event Qualifiers
     Event specifiers for these PMCs support the following common qualifiers:

     cmask=value
	     Configure the PMC to increment only if the number of configured
	     events measured in a cycle is greater than or equal to value.

     edge    Configure the PMC to count the number of de-asserted to asserted
	     transitions of the conditions expressed by the other qualifiers.
	     If specified, the counter will increment only once whenever a
	     condition becomes true, irrespective of the number of clocks dur‐
	     ing which the condition remains true.

     inv     Invert the sense of comparison when the “cmask” qualifier is
	     present, making the counter increment when the number of events
	     per cycle is less than the value specified by the “cmask” quali‐
	     fier.
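
     For example, the specifier “GQ_DATA.FROM_QPI,edge” counts the number
     of times the GQ QPI input data port goes from idle to busy, rather
     than the number of cycles it is busy.  The fragment below is an
     illustrative sketch only; it assumes the pmc_allocate(3) interface
     described in pmc(3), and the exact event specifier syntax (including
     any class prefix) is likewise documented there:

           #include <sys/types.h>
           #include <err.h>
           #include <pmc.h>
           #include <stdint.h>
           #include <stdio.h>
           #include <unistd.h>

           int
           main(void)
           {
                   pmc_id_t pmcid;
                   pmc_value_t v;

                   if (pmc_init() < 0)
                           err(1, "pmc_init");
                   /*
                    * Allocate a system-wide counting PMC on CPU 0 (system
                    * scope PMCs need appropriate privilege).  The "edge"
                    * qualifier counts idle-to-busy transitions of the GQ
                    * QPI input data port.
                    */
                   if (pmc_allocate("GQ_DATA.FROM_QPI,edge", PMC_MODE_SC,
                       0, 0, &pmcid) < 0)
                           err(1, "pmc_allocate");
                   if (pmc_start(pmcid) < 0)
                           err(1, "pmc_start");
                   sleep(1);               /* measure for one second */
                   if (pmc_read(pmcid, &v) < 0)
                           err(1, "pmc_read");
                   printf("QPI input data bursts: %ju\n", (uintmax_t)v);
                   pmc_stop(pmcid);
                   pmc_release(pmcid);
                   return (0);
           }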

   Event Specifiers (Programmable PMCs)
     Westmere uncore programmable PMCs support the following events:

     GQ_CYCLES_FULL.READ_TRACKER
	     (Event 00H, Umask 01H) Uncore cycles Global Queue read tracker is
	     full.

     GQ_CYCLES_FULL.WRITE_TRACKER
	     (Event 00H, Umask 02H) Uncore cycles Global Queue write tracker
	     is full.

     GQ_CYCLES_FULL.PEER_PROBE_TRACKER
	     (Event 00H, Umask 04H) Uncore cycles Global Queue peer probe
	     tracker is full. The peer probe tracker queue tracks snoops from
	     the IOH and remote sockets.

     GQ_CYCLES_NOT_EMPTY.READ_TRACKER
             (Event 01H, Umask 01H) Uncore cycles where the Global Queue
             read tracker has at least one valid entry.

     GQ_CYCLES_NOT_EMPTY.WRITE_TRACKER
             (Event 01H, Umask 02H) Uncore cycles where the Global Queue
             write tracker has at least one valid entry.

     GQ_CYCLES_NOT_EMPTY.PEER_PROBE_TRACKER
             (Event 01H, Umask 04H) Uncore cycles where the Global Queue
             peer probe tracker has at least one valid entry. The peer probe
             tracker queue tracks IOH and remote socket snoops.

     GQ_OCCUPANCY.READ_TRACKER
             (Event 02H, Umask 01H) Increments the number of queue entries
             (code read, data read, and RFOs) in the read tracker. The GQ
             read tracker allocate to deallocate occupancy count is divided
             by the count to obtain the average read tracker latency.

     GQ_ALLOC.READ_TRACKER
             (Event 03H, Umask 01H) Counts the number of read tracker
             allocate to deallocate entries. The GQ read tracker allocate to
             deallocate occupancy count is divided by the count to obtain
             the average read tracker latency.
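             For example, the average read tracker latency in uncore cycles
             can be obtained as the ratio
             GQ_OCCUPANCY.READ_TRACKER / GQ_ALLOC.READ_TRACKER.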

     GQ_ALLOC.RT_L3_MISS
             (Event 03H, Umask 02H) Counts the number of GQ read tracker
             entries for which a full cache line read has missed the L3.
             The GQ read
	     tracker L3 miss to fill occupancy count is divided by this count
	     to obtain the average cache line read L3 miss latency.  The
	     latency represents the time after which the L3 has determined
	     that the cache line has missed. The time between a GQ read
	     tracker allocation and the L3 determining that the cache line has
	     missed is the average L3 hit latency.  The total L3 cache line
	     read miss latency is the hit latency + L3 miss latency.

     GQ_ALLOC.RT_TO_L3_RESP
	     (Event 03H, Umask 04H) Counts the number of GQ read tracker
	     entries that are allocated in the read tracker queue that hit or
	     miss the L3. The GQ read tracker L3 hit occupancy count is
	     divided by this count to obtain the average L3 hit latency.

     GQ_ALLOC.RT_TO_RTID_ACQUIRED
	     (Event 03H, Umask 08H) Counts the number of GQ read tracker
	     entries that are allocated in the read tracker, have missed in
	     the L3 and have not acquired a Request Transaction ID.  The GQ
	     read tracker L3 miss to RTID acquired occupancy count is divided
	     by this count to obtain the average latency for a read L3 miss to
	     acquire an RTID.

     GQ_ALLOC.WT_TO_RTID_ACQUIRED
	     (Event 03H, Umask 10H) Counts the number of GQ write tracker
	     entries that are allocated in the write tracker, have missed in
	     the L3 and have not acquired a Request Transaction ID.	The GQ
	     write tracker L3 miss to RTID occupancy count is divided by this
	     count to obtain the average latency for a write L3 miss to
	     acquire an RTID.

     GQ_ALLOC.WRITE_TRACKER
             (Event 03H, Umask 20H) Counts the number of GQ write tracker
             entries that are allocated in the write tracker queue that miss
             the L3. The GQ write tracker occupancy count is divided by this
             count to obtain the average L3 write miss latency.

     GQ_ALLOC.PEER_PROBE_TRACKER
	     (Event 03H, Umask 40H) Counts the number of GQ peer probe tracker
	     (snoop) entries that are allocated in the peer probe tracker
	     queue that miss the L3. The GQ peer probe occupancy count is
	     divided by this count to obtain the average L3 peer probe miss
	     latency.

     GQ_DATA.FROM_QPI
	     (Event 04H, Umask 01H) Cycles Global Queue Quickpath Interface
	     input data port is busy importing data from the Quickpath Inter‐
	     face. Each cycle the input port can transfer 8 or 16 bytes of
	     data.

     GQ_DATA.FROM_QMC
	     (Event 04H, Umask 02H) Cycles Global Queue Quickpath Memory
	     Interface input data port is busy importing data from the Quick‐
	     path Memory Interface. Each cycle the input port can transfer 8
	     or 16 bytes of data.

     GQ_DATA.FROM_L3
	     (Event 04H, Umask 04H) Cycles GQ L3 input data port is busy
	     importing data from the Last Level Cache. Each cycle the input
	     port can transfer 32 bytes of data.

     GQ_DATA.FROM_CORES_02
	     (Event 04H, Umask 08H) Cycles GQ Core 0 and 2 input data port is
	     busy importing data from processor cores 0 and 2. Each cycle the
	     input port can transfer 32 bytes of data.

     GQ_DATA.FROM_CORES_13
	     (Event 04H, Umask 10H) Cycles GQ Core 1 and 3 input data port is
	     busy importing data from processor cores 1 and 3. Each cycle the
	     input port can transfer 32 bytes of data.

     GQ_DATA.TO_QPI_QMC
	     (Event 05H, Umask 01H) Cycles GQ QPI and QMC output data port is
	     busy sending data to the Quickpath Interface or Quickpath Memory
	     Interface. Each cycle the output port can transfer 32 bytes of
	     data.

     GQ_DATA.TO_L3
	     (Event 05H, Umask 02H) Cycles GQ L3 output data port is busy
	     sending data to the Last Level Cache.  Each cycle the output port
	     can transfer 32 bytes of data.

     GQ_DATA.TO_CORES
	     (Event 05H, Umask 04H) Cycles GQ Core output data port is busy
	     sending data to the Cores. Each cycle the output port can trans‐
	     fer 32 bytes of data.

     SNP_RESP_TO_LOCAL_HOME.I_STATE
	     (Event 06H, Umask 01H) Number of snoop responses to the local
	     home that L3 does not have the referenced cache line.

     SNP_RESP_TO_LOCAL_HOME.S_STATE
	     (Event 06H, Umask 02H) Number of snoop responses to the local
	     home that L3 has the referenced line cached in the S state.

     SNP_RESP_TO_LOCAL_HOME.FWD_S_STATE
	     (Event 06H, Umask 04H) Number of responses to code or data read
	     snoops to the local home that the L3 has the referenced cache
	     line in the E state. The L3 cache line state is changed to the S
	     state and the line is forwarded to the local home in the S state.

     SNP_RESP_TO_LOCAL_HOME.FWD_I_STATE
	     (Event 06H, Umask 08H) Number of responses to read invalidate
	     snoops to the local home that the L3 has the referenced cache
	     line in the M state. The L3 cache line state is invalidated and
	     the line is forwarded to the local home in the M state.

     SNP_RESP_TO_LOCAL_HOME.CONFLICT
	     (Event 06H, Umask 10H) Number of conflict snoop responses sent to
	     the local home.

     SNP_RESP_TO_LOCAL_HOME.WB
	     (Event 06H, Umask 20H) Number of responses to code or data read
	     snoops to the local home that the L3 has the referenced line
	     cached in the M state.

     SNP_RESP_TO_REMOTE_HOME.I_STATE
	     (Event 07H, Umask 01H) Number of snoop responses to a remote home
	     that L3 does not have the referenced cache line.

     SNP_RESP_TO_REMOTE_HOME.S_STATE
	     (Event 07H, Umask 02H) Number of snoop responses to a remote home
	     that L3 has the referenced line cached in the S state.

     SNP_RESP_TO_REMOTE_HOME.FWD_S_STATE
	     (Event 07H, Umask 04H) Number of responses to code or data read
	     snoops to a remote home that the L3 has the referenced cache line
	     in the E state. The L3 cache line state is changed to the S state
	     and the line is forwarded to the remote home in the S state.

     SNP_RESP_TO_REMOTE_HOME.FWD_I_STATE
	     (Event 07H, Umask 08H) Number of responses to read invalidate
	     snoops to a remote home that the L3 has the referenced cache line
	     in the M state. The L3 cache line state is invalidated and the
	     line is forwarded to the remote home in the M state.

     SNP_RESP_TO_REMOTE_HOME.CONFLICT
             (Event 07H, Umask 10H) Number of conflict snoop responses sent
             to the remote home.

     SNP_RESP_TO_REMOTE_HOME.WB
	     (Event 07H, Umask 20H) Number of responses to code or data read
	     snoops to a remote home that the L3 has the referenced line
	     cached in the M state.

     SNP_RESP_TO_REMOTE_HOME.HITM
	     (Event 07H, Umask 24H) Number of HITM snoop responses to a remote
	     home.

     L3_HITS.READ
	     (Event 08H, Umask 01H) Number of code read, data read and RFO
	     requests that hit in the L3.

     L3_HITS.WRITE
	     (Event 08H, Umask 02H) Number of writeback requests that hit in
	     the L3. Writebacks from the cores will always result in L3 hits
	     due to the inclusive property of the L3.

     L3_HITS.PROBE
	     (Event 08H, Umask 04H) Number of snoops from IOH or remote sock‐
	     ets that hit in the L3.

     L3_HITS.ANY
	     (Event 08H, Umask 03H) Number of reads and writes that hit the
	     L3.

     L3_MISS.READ
	     (Event 09H, Umask 01H) Number of code read, data read and RFO
	     requests that miss the L3.

     L3_MISS.WRITE
	     (Event 09H, Umask 02H) Number of writeback requests that miss the
	     L3. Should always be zero as writebacks from the cores will
	     always result in L3 hits due to the inclusive property of the L3.

     L3_MISS.PROBE
	     (Event 09H, Umask 04H) Number of snoops from IOH or remote sock‐
	     ets that miss the L3.

     L3_MISS.ANY
	     (Event 09H, Umask 03H) Number of reads and writes that miss the
	     L3.

     L3_LINES_IN.M_STATE
             (Event 0AH, Umask 01H) Counts the number of L3 lines allocated
             in M state. The only time a cache line is allocated in the M
             state is when the line was forwarded in M state due to a Snoop
             Read Invalidate Own request.

     L3_LINES_IN.E_STATE
	     (Event 0AH, Umask 02H) Counts the number of L3 lines allocated in
	     E state.

     L3_LINES_IN.S_STATE
	     (Event 0AH, Umask 04H) Counts the number of L3 lines allocated in
	     S state.

     L3_LINES_IN.F_STATE
	     (Event 0AH, Umask 08H) Counts the number of L3 lines allocated in
	     F state.

     L3_LINES_IN.ANY
	     (Event 0AH, Umask 0FH) Counts the number of L3 lines allocated in
	     any state.

     L3_LINES_OUT.M_STATE
	     (Event 0BH, Umask 01H) Counts the number of L3 lines victimized
	     that were in the M state. When the victim cache line is in M
	     state, the line is written to its home cache agent which can be
	     either local or remote.

     L3_LINES_OUT.E_STATE
	     (Event 0BH, Umask 02H) Counts the number of L3 lines victimized
	     that were in the E state.

     L3_LINES_OUT.S_STATE
	     (Event 0BH, Umask 04H) Counts the number of L3 lines victimized
	     that were in the S state.

     L3_LINES_OUT.I_STATE
	     (Event 0BH, Umask 08H) Counts the number of L3 lines victimized
	     that were in the I state.

     L3_LINES_OUT.F_STATE
	     (Event 0BH, Umask 10H) Counts the number of L3 lines victimized
	     that were in the F state.

     L3_LINES_OUT.ANY
	     (Event 0BH, Umask 1FH) Counts the number of L3 lines victimized
	     in any state.

     GQ_SNOOP.GOTO_S
	     (Event 0CH, Umask 01H) Counts the number of remote snoops that
	     have requested a cache line be set to the S state.

     GQ_SNOOP.GOTO_I
	     (Event 0CH, Umask 02H) Counts the number of remote snoops that
	     have requested a cache line be set to the I state.

     GQ_SNOOP.GOTO_S_HIT_E
	     (Event 0CH, Umask 04H) Counts the number of remote snoops that
	     have requested a cache line be set to the S state from E state.
	     Requires writing MSR 301H with mask = 2H

     GQ_SNOOP.GOTO_S_HIT_F
	     (Event 0CH, Umask 04H) Counts the number of remote snoops that
	     have requested a cache line be set to the S state from F (for‐
	     ward) state.  Requires writing MSR 301H with mask = 8H

     GQ_SNOOP.GOTO_S_HIT_M
	     (Event 0CH, Umask 04H) Counts the number of remote snoops that
	     have requested a cache line be set to the S state from M state.
	     Requires writing MSR 301H with mask = 1H

     GQ_SNOOP.GOTO_S_HIT_S
	     (Event 0CH, Umask 04H) Counts the number of remote snoops that
	     have requested a cache line be set to the S state from S state.
	     Requires writing MSR 301H with mask = 4H

     GQ_SNOOP.GOTO_I_HIT_E
	     (Event 0CH, Umask 08H) Counts the number of remote snoops that
	     have requested a cache line be set to the I state from E state.
	     Requires writing MSR 301H with mask = 2H

     GQ_SNOOP.GOTO_I_HIT_F
	     (Event 0CH, Umask 08H) Counts the number of remote snoops that
	     have requested a cache line be set to the I state from F (for‐
	     ward) state.  Requires writing MSR 301H with mask = 8H

     GQ_SNOOP.GOTO_I_HIT_M
	     (Event 0CH, Umask 08H) Counts the number of remote snoops that
	     have requested a cache line be set to the I state from M state.
	     Requires writing MSR 301H with mask = 1H

     GQ_SNOOP.GOTO_I_HIT_S
	     (Event 0CH, Umask 08H) Counts the number of remote snoops that
	     have requested a cache line be set to the I state from S state.
	     Requires writing MSR 301H with mask = 4H
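             As with the other GQ_SNOOP filter events above, MSR 301H must
             be programmed with the listed mask value before use.  This page
             does not specify how that MSR is to be written; purely as an
             illustrative assumption, the cpuctl(4) driver could be used
             from user space, as in the following sketch:

               #include <sys/types.h>
               #include <sys/ioctl.h>
               #include <sys/cpuctl.h>
               #include <err.h>
               #include <fcntl.h>
               #include <unistd.h>

               int
               main(void)
               {
                       cpuctl_msr_args_t args;
                       int fd;

                       /*
                        * Assumption: the cpuctl(4) driver is loaded.  Open
                        * the device for a core in the target package and
                        * select the "hit E state" filter (mask = 2H) used
                        * by GQ_SNOOP.GOTO_S_HIT_E.
                        */
                       fd = open("/dev/cpuctl0", O_RDWR);
                       if (fd < 0)
                               err(1, "open /dev/cpuctl0");
                       args.msr = 0x301;
                       args.data = 0x2;
                       if (ioctl(fd, CPUCTL_WRMSR, &args) < 0)
                               err(1, "CPUCTL_WRMSR");
                       close(fd);
                       return (0);
               }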

     QHL_REQUESTS.IOH_READS
	     (Event 20H, Umask 01H) Counts number of Quickpath Home Logic read
	     requests from the IOH.

     QHL_REQUESTS.IOH_WRITES
	     (Event 20H, Umask 02H) Counts number of Quickpath Home Logic
	     write requests from the IOH.

     QHL_REQUESTS.REMOTE_READS
	     (Event 20H, Umask 04H) Counts number of Quickpath Home Logic read
	     requests from a remote socket.

     QHL_REQUESTS.REMOTE_WRITES
	     (Event 20H, Umask 08H) Counts number of Quickpath Home Logic
	     write requests from a remote socket.

     QHL_REQUESTS.LOCAL_READS
	     (Event 20H, Umask 10H) Counts number of Quickpath Home Logic read
	     requests from the local socket.

     QHL_REQUESTS.LOCAL_WRITES
	     (Event 20H, Umask 20H) Counts number of Quickpath Home Logic
	     write requests from the local socket.

     QHL_CYCLES_FULL.IOH
	     (Event 21H, Umask 01H) Counts uclk cycles all entries in the
	     Quickpath Home Logic IOH are full.

     QHL_CYCLES_FULL.REMOTE
	     (Event 21H, Umask 02H) Counts uclk cycles all entries in the
	     Quickpath Home Logic remote tracker are full.

     QHL_CYCLES_FULL.LOCAL
	     (Event 21H, Umask 04H) Counts uclk cycles all entries in the
	     Quickpath Home Logic local tracker are full.

     QHL_CYCLES_NOT_EMPTY.IOH
             (Event 22H, Umask 01H) Counts uclk cycles during which the
             Quickpath Home Logic IOH tracker is busy (not empty).

     QHL_CYCLES_NOT_EMPTY.REMOTE
             (Event 22H, Umask 02H) Counts uclk cycles during which the
             Quickpath Home Logic remote tracker is busy (not empty).

     QHL_CYCLES_NOT_EMPTY.LOCAL
             (Event 22H, Umask 04H) Counts uclk cycles during which the
             Quickpath Home Logic local tracker is busy (not empty).

     QHL_OCCUPANCY.IOH
	     (Event 23H, Umask 01H) QHL IOH tracker allocate to deallocate
	     read occupancy.

     QHL_OCCUPANCY.REMOTE
	     (Event 23H, Umask 02H) QHL remote tracker allocate to deallocate
	     read occupancy.

     QHL_OCCUPANCY.LOCAL
	     (Event 23H, Umask 04H) QHL local tracker allocate to deallocate
	     read occupancy.

     QHL_ADDRESS_CONFLICTS.2WAY
	     (Event 24H, Umask 02H) Counts number of QHL Active Address Table
	     (AAT) entries that saw a max of 2 conflicts. The AAT is a struc‐
	     ture that tracks requests that are in conflict.  The requests
	     themselves are in the home tracker entries. The count is reported
	     when an AAT entry deallocates.

     QHL_ADDRESS_CONFLICTS.3WAY
	     (Event 24H, Umask 04H) Counts number of QHL Active Address Table
	     (AAT) entries that saw a max of 3 conflicts. The AAT is a struc‐
	     ture that tracks requests that are in conflict.  The requests
	     themselves are in the home tracker entries. The count is reported
	     when an AAT entry deallocates.

     QHL_CONFLICT_CYCLES.IOH
	     (Event 25H, Umask 01H) Counts cycles the Quickpath Home Logic IOH
	     Tracker contains two or more requests with an address conflict. A
	     max of 3 requests can be in conflict.

     QHL_CONFLICT_CYCLES.REMOTE
	     (Event 25H, Umask 02H) Counts cycles the Quickpath Home Logic
	     Remote Tracker contains two or more requests with an address con‐
	     flict. A max of 3 requests can be in conflict.

     QHL_CONFLICT_CYCLES.LOCAL
	     (Event 25H, Umask 04H) Counts cycles the Quickpath Home Logic
	     Local Tracker contains two or more requests with an address con‐
	     flict. A max of 3 requests can be in conflict.

     QHL_TO_QMC_BYPASS
             (Event 26H, Umask 01H) Counts number of requests to the Quickpath
	     Memory Controller that bypass the Quickpath Home Logic. All local
	     accesses can be bypassed. For remote requests, only read requests
	     can be bypassed.

     QMC_ISOC_FULL.READ.CH0
	     (Event 28H, Umask 01H) Counts cycles all the entries in the DRAM
	     channel 0 high priority queue are occupied with isochronous read
	     requests.

     QMC_ISOC_FULL.READ.CH1
	     (Event 28H, Umask 02H) Counts cycles all the entries in the DRAM
             channel 1 high priority queue are occupied with isochronous read
	     requests.

     QMC_ISOC_FULL.READ.CH2
	     (Event 28H, Umask 04H) Counts cycles all the entries in the DRAM
	     channel 2 high priority queue are occupied with isochronous read
	     requests.

     QMC_ISOC_FULL.WRITE.CH0
	     (Event 28H, Umask 08H) Counts cycles all the entries in the DRAM
	     channel 0 high priority queue are occupied with isochronous write
	     requests.

     QMC_ISOC_FULL.WRITE.CH1
	     (Event 28H, Umask 10H) Counts cycles all the entries in the DRAM
	     channel 1 high priority queue are occupied with isochronous write
	     requests.

     QMC_ISOC_FULL.WRITE.CH2
	     (Event 28H, Umask 20H) Counts cycles all the entries in the DRAM
	     channel 2 high priority queue are occupied with isochronous write
	     requests.

     QMC_BUSY.READ.CH0
	     (Event 29H, Umask 01H) Counts cycles where Quickpath Memory Con‐
	     troller has at least 1 outstanding read request to DRAM channel
	     0.

     QMC_BUSY.READ.CH1
	     (Event 29H, Umask 02H) Counts cycles where Quickpath Memory Con‐
	     troller has at least 1 outstanding read request to DRAM channel
	     1.

     QMC_BUSY.READ.CH2
	     (Event 29H, Umask 04H) Counts cycles where Quickpath Memory Con‐
	     troller has at least 1 outstanding read request to DRAM channel
	     2.

     QMC_BUSY.WRITE.CH0
	     (Event 29H, Umask 08H) Counts cycles where Quickpath Memory Con‐
	     troller has at least 1 outstanding write request to DRAM channel
	     0.

     QMC_BUSY.WRITE.CH1
	     (Event 29H, Umask 10H) Counts cycles where Quickpath Memory Con‐
	     troller has at least 1 outstanding write request to DRAM channel
	     1.

     QMC_BUSY.WRITE.CH2
	     (Event 29H, Umask 20H) Counts cycles where Quickpath Memory Con‐
	     troller has at least 1 outstanding write request to DRAM channel
	     2.

     QMC_OCCUPANCY.CH0
	     (Event 2AH, Umask 01H) IMC channel 0 normal read request occu‐
	     pancy.

     QMC_OCCUPANCY.CH1
	     (Event 2AH, Umask 02H) IMC channel 1 normal read request occu‐
	     pancy.

     QMC_OCCUPANCY.CH2
	     (Event 2AH, Umask 04H) IMC channel 2 normal read request occu‐
	     pancy.

     QMC_OCCUPANCY.ANY
	     (Event 2AH, Umask 07H) Normal read request occupancy for any
	     channel.

     QMC_ISSOC_OCCUPANCY.CH0
             (Event 2BH, Umask 01H) IMC channel 0 isochronous read request
             occupancy.

     QMC_ISSOC_OCCUPANCY.CH1
             (Event 2BH, Umask 02H) IMC channel 1 isochronous read request
             occupancy.

     QMC_ISSOC_OCCUPANCY.CH2
             (Event 2BH, Umask 04H) IMC channel 2 isochronous read request
             occupancy.

     QMC_ISSOC_READS.ANY
             (Event 2BH, Umask 07H) IMC isochronous read request occupancy.

     QMC_NORMAL_READS.CH0
	     (Event 2CH, Umask 01H) Counts the number of Quickpath Memory Con‐
	     troller channel 0 medium and low priority read requests. The QMC
	     channel 0 normal read occupancy divided by this count provides
	     the average QMC channel 0 read latency.

     QMC_NORMAL_READS.CH1
	     (Event 2CH, Umask 02H) Counts the number of Quickpath Memory Con‐
	     troller channel 1 medium and low priority read requests. The QMC
	     channel 1 normal read occupancy divided by this count provides
	     the average QMC channel 1 read latency.

     QMC_NORMAL_READS.CH2
	     (Event 2CH, Umask 04H) Counts the number of Quickpath Memory Con‐
	     troller channel 2 medium and low priority read requests. The QMC
	     channel 2 normal read occupancy divided by this count provides
	     the average QMC channel 2 read latency.

     QMC_NORMAL_READS.ANY
	     (Event 2CH, Umask 07H) Counts the number of Quickpath Memory Con‐
	     troller medium and low priority read requests. The QMC normal
	     read occupancy divided by this count provides the average QMC
	     read latency.

     QMC_HIGH_PRIORITY_READS.CH0
	     (Event 2DH, Umask 01H) Counts the number of Quickpath Memory Con‐
	     troller channel 0 high priority isochronous read requests.

     QMC_HIGH_PRIORITY_READS.CH1
	     (Event 2DH, Umask 02H) Counts the number of Quickpath Memory Con‐
	     troller channel 1 high priority isochronous read requests.

     QMC_HIGH_PRIORITY_READS.CH2
	     (Event 2DH, Umask 04H) Counts the number of Quickpath Memory Con‐
	     troller channel 2 high priority isochronous read requests.

     QMC_HIGH_PRIORITY_READS.ANY
	     (Event 2DH, Umask 07H) Counts the number of Quickpath Memory Con‐
	     troller high priority isochronous read requests.

     QMC_CRITICAL_PRIORITY_READS.CH0
	     (Event 2EH, Umask 01H) Counts the number of Quickpath Memory Con‐
	     troller channel 0 critical priority isochronous read requests.

     QMC_CRITICAL_PRIORITY_READS.CH1
	     (Event 2EH, Umask 02H) Counts the number of Quickpath Memory Con‐
	     troller channel 1 critical priority isochronous read requests.

     QMC_CRITICAL_PRIORITY_READS.CH2
	     (Event 2EH, Umask 04H) Counts the number of Quickpath Memory Con‐
	     troller channel 2 critical priority isochronous read requests.

     QMC_CRITICAL_PRIORITY_READS.ANY
	     (Event 2EH, Umask 07H) Counts the number of Quickpath Memory Con‐
	     troller critical priority isochronous read requests.

     QMC_WRITES.FULL.CH0
	     (Event 2FH, Umask 01H) Counts number of full cache line writes to
	     DRAM channel 0.

     QMC_WRITES.FULL.CH1
	     (Event 2FH, Umask 02H) Counts number of full cache line writes to
	     DRAM channel 1.

     QMC_WRITES.FULL.CH2
	     (Event 2FH, Umask 04H) Counts number of full cache line writes to
	     DRAM channel 2.

     QMC_WRITES.FULL.ANY
	     (Event 2FH, Umask 07H) Counts number of full cache line writes to
	     DRAM.

     QMC_WRITES.PARTIAL.CH0
	     (Event 2FH, Umask 08H) Counts number of partial cache line writes
	     to DRAM channel 0.

     QMC_WRITES.PARTIAL.CH1
	     (Event 2FH, Umask 10H) Counts number of partial cache line writes
	     to DRAM channel 1.

     QMC_WRITES.PARTIAL.CH2
	     (Event 2FH, Umask 20H) Counts number of partial cache line writes
	     to DRAM channel 2.

     QMC_WRITES.PARTIAL.ANY
	     (Event 2FH, Umask 38H) Counts number of partial cache line writes
	     to DRAM.

     QMC_CANCEL.CH0
	     (Event 30H, Umask 01H) Counts number of DRAM channel 0 cancel
	     requests.

     QMC_CANCEL.CH1
	     (Event 30H, Umask 02H) Counts number of DRAM channel 1 cancel
	     requests.

     QMC_CANCEL.CH2
	     (Event 30H, Umask 04H) Counts number of DRAM channel 2 cancel
	     requests.

     QMC_CANCEL.ANY
	     (Event 30H, Umask 07H) Counts number of DRAM cancel requests.

     QMC_PRIORITY_UPDATES.CH0
	     (Event 31H, Umask 01H) Counts number of DRAM channel 0 priority
	     updates. A priority update occurs when an ISOC high or critical
	     request is received by the QHL and there is a matching request
	     with normal priority that has already been issued to the QMC. In
	     this instance, the QHL will send a priority update to QMC to
	     expedite the request.

     QMC_PRIORITY_UPDATES.CH1
	     (Event 31H, Umask 02H) Counts number of DRAM channel 1 priority
	     updates. A priority update occurs when an ISOC high or critical
	     request is received by the QHL and there is a matching request
	     with normal priority that has already been issued to the QMC. In
	     this instance, the QHL will send a priority update to QMC to
	     expedite the request.

     QMC_PRIORITY_UPDATES.CH2
	     (Event 31H, Umask 04H) Counts number of DRAM channel 2 priority
	     updates. A priority update occurs when an ISOC high or critical
	     request is received by the QHL and there is a matching request
	     with normal priority that has already been issued to the QMC. In
	     this instance, the QHL will send a priority update to QMC to
	     expedite the request.

     QMC_PRIORITY_UPDATES.ANY
	     (Event 31H, Umask 07H) Counts number of DRAM priority updates. A
	     priority update occurs when an ISOC high or critical request is
	     received by the QHL and there is a matching request with normal
	     priority that has already been issued to the QMC. In this
	     instance, the QHL will send a priority update to QMC to expedite
	     the request.

     IMC_RETRY.CH0
	     (Event 32H, Umask 01H) Counts number of IMC DRAM channel 0
	     retries. DRAM retry only occurs when configured in RAS mode.

     IMC_RETRY.CH1
	     (Event 32H, Umask 02H) Counts number of IMC DRAM channel 1
	     retries. DRAM retry only occurs when configured in RAS mode.

     IMC_RETRY.CH2
	     (Event 32H, Umask 04H) Counts number of IMC DRAM channel 2
	     retries. DRAM retry only occurs when configured in RAS mode.

     IMC_RETRY.ANY
	     (Event 32H, Umask 07H) Counts number of IMC DRAM retries from any
	     channel. DRAM retry only occurs when configured in RAS mode.

     QHL_FRC_ACK_CNFLTS.IOH
	     (Event 33H, Umask 01H) Counts number of Force Acknowledge Con‐
	     flict messages sent by the Quickpath Home Logic to the IOH.

     QHL_FRC_ACK_CNFLTS.REMOTE
	     (Event 33H, Umask 02H) Counts number of Force Acknowledge Con‐
	     flict messages sent by the Quickpath Home Logic to the remote
	     home.

     QHL_FRC_ACK_CNFLTS.LOCAL
	     (Event 33H, Umask 04H) Counts number of Force Acknowledge Con‐
	     flict messages sent by the Quickpath Home Logic to the local
	     home.

     QHL_FRC_ACK_CNFLTS.ANY
	     (Event 33H, Umask 07H) Counts number of Force Acknowledge Con‐
	     flict messages sent by the Quickpath Home Logic.

     QHL_SLEEPS.IOH_ORDER
	     (Event 34H, Umask 01H) Counts number of occurrences a request was
	     put to sleep due to IOH ordering (write after read) conflicts.
	     While in the sleep state, the request is not eligible to be
	     scheduled to the QMC.

     QHL_SLEEPS.REMOTE_ORDER
	     (Event 34H, Umask 02H) Counts number of occurrences a request was
	     put to sleep due to remote socket ordering (write after read)
	     conflicts. While in the sleep state, the request is not eligible
	     to be scheduled to the QMC.

     QHL_SLEEPS.LOCAL_ORDER
	     (Event 34H, Umask 04H) Counts number of occurrences a request was
	     put to sleep due to local socket ordering (write after read) con‐
	     flicts. While in the sleep state, the request is not eligible to
	     be scheduled to the QMC.

     QHL_SLEEPS.IOH_CONFLICT
	     (Event 34H, Umask 08H) Counts number of occurrences a request was
	     put to sleep due to IOH address conflicts. While in the sleep
	     state, the request is not eligible to be scheduled to the QMC.

     QHL_SLEEPS.REMOTE_CONFLICT
	     (Event 34H, Umask 10H) Counts number of occurrences a request was
	     put to sleep due to remote socket address conflicts. While in the
	     sleep state, the request is not eligible to be scheduled to the
	     QMC.

     QHL_SLEEPS.LOCAL_CONFLICT
	     (Event 34H, Umask 20H) Counts number of occurrences a request was
	     put to sleep due to local socket address conflicts. While in the
	     sleep state, the request is not eligible to be scheduled to the
	     QMC.

     ADDR_OPCODE_MATCH.IOH
             (Event 35H, Umask 01H) Counts the number of requests from the
             IOH whose address/opcode matches the mask value written to MSR
             396H.  The following mask values are supported: 0 (NONE),
             40000000_00000000H (RSPFWDI), 40001A00_00000000H (RSPFWDS), and
             40001D00_00000000H (RSPIWB).  Match the opcode/address by
             writing MSR 396H with a supported mask value.

     ADDR_OPCODE_MATCH.REMOTE
             (Event 35H, Umask 02H) Counts the number of requests from the
             remote socket whose address/opcode matches the mask value
             written to MSR 396H.  The following mask values are supported:
             0 (NONE), 40000000_00000000H (RSPFWDI), 40001A00_00000000H
             (RSPFWDS), and 40001D00_00000000H (RSPIWB).  Match the
             opcode/address by writing MSR 396H with a supported mask value.

     ADDR_OPCODE_MATCH.LOCAL
             (Event 35H, Umask 04H) Counts the number of requests from the
             local socket whose address/opcode matches the mask value
             written to MSR 396H.  The following mask values are supported:
             0 (NONE), 40000000_00000000H (RSPFWDI), 40001A00_00000000H
             (RSPFWDS), and 40001D00_00000000H (RSPIWB).  Match the
             opcode/address by writing MSR 396H with a supported mask value.

     QPI_TX_STALLED_SINGLE_FLIT.HOME.LINK_0
	     (Event 40H, Umask 01H) Counts cycles the Quickpath outbound link
	     0 HOME virtual channel is stalled due to lack of a VNA and VN0
	     credit. Note that this event does not filter out when a flit
	     would not have been selected for arbitration because another vir‐
	     tual channel is getting arbitrated.

     QPI_TX_STALLED_SINGLE_FLIT.SNOOP.LINK_0
	     (Event 40H, Umask 02H) Counts cycles the Quickpath outbound link
	     0 SNOOP virtual channel is stalled due to lack of a VNA and VN0
	     credit. Note that this event does not filter out when a flit
	     would not have been selected for arbitration because another vir‐
	     tual channel is getting arbitrated.

     QPI_TX_STALLED_SINGLE_FLIT.NDR.LINK_0
	     (Event 40H, Umask 04H) Counts cycles the Quickpath outbound link
	     0 non-data response virtual channel is stalled due to lack of a
	     VNA and VN0 credit. Note that this event does not filter out when
	     a flit would not have been selected for arbitration because
	     another virtual channel is getting arbitrated.

     QPI_TX_STALLED_SINGLE_FLIT.HOME.LINK_1
	     (Event 40H, Umask 08H) Counts cycles the Quickpath outbound link
	     1 HOME virtual channel is stalled due to lack of a VNA and VN0
	     credit. Note that this event does not filter out when a flit
	     would not have been selected for arbitration because another vir‐
	     tual channel is getting arbitrated.

     QPI_TX_STALLED_SINGLE_FLIT.SNOOP.LINK_1
	     (Event 40H, Umask 10H) Counts cycles the Quickpath outbound link
	     1 SNOOP virtual channel is stalled due to lack of a VNA and VN0
	     credit. Note that this event does not filter out when a flit
	     would not have been selected for arbitration because another vir‐
	     tual channel is getting arbitrated.

     QPI_TX_STALLED_SINGLE_FLIT.NDR.LINK_1
	     (Event 40H, Umask 20H) Counts cycles the Quickpath outbound link
	     1 non-data response virtual channel is stalled due to lack of a
	     VNA and VN0 credit. Note that this event does not filter out when
	     a flit would not have been selected for arbitration because
	     another virtual channel is getting arbitrated.

     QPI_TX_STALLED_SINGLE_FLIT.LINK_0
	     (Event 40H, Umask 07H) Counts cycles the Quickpath outbound link
	     0 virtual channels are stalled due to lack of a VNA and VN0
	     credit. Note that this event does not filter out when a flit
	     would not have been selected for arbitration because another vir‐
	     tual channel is getting arbitrated.

     QPI_TX_STALLED_SINGLE_FLIT.LINK_1
	     (Event 40H, Umask 38H) Counts cycles the Quickpath outbound link
	     1 virtual channels are stalled due to lack of a VNA and VN0
	     credit. Note that this event does not filter out when a flit
	     would not have been selected for arbitration because another vir‐
	     tual channel is getting arbitrated.

     QPI_TX_STALLED_MULTI_FLIT.DRS.LINK_0
	     (Event 41H, Umask 01H) Counts cycles the Quickpath outbound link
	     0 Data ResponSe virtual channel is stalled due to lack of VNA and
	     VN0 credits. Note that this event does not filter out when a flit
	     would not have been selected for arbitration because another vir‐
	     tual channel is getting arbitrated.

     QPI_TX_STALLED_MULTI_FLIT.NCB.LINK_0
	     (Event 41H, Umask 02H) Counts cycles the Quickpath outbound link
	     0 Non-Coherent Bypass virtual channel is stalled due to lack of
	     VNA and VN0 credits. Note that this event does not filter out
	     when a flit would not have been selected for arbitration because
	     another virtual channel is getting arbitrated.

     QPI_TX_STALLED_MULTI_FLIT.NCS.LINK_0
	     (Event 41H, Umask 04H) Counts cycles the Quickpath outbound link
	     0 Non-Coherent Standard virtual channel is stalled due to lack of
	     VNA and VN0 credits. Note that this event does not filter out
	     when a flit would not have been selected for arbitration because
	     another virtual channel is getting arbitrated.

     QPI_TX_STALLED_MULTI_FLIT.DRS.LINK_1
	     (Event 41H, Umask 08H) Counts cycles the Quickpath outbound link
	     1 Data ResponSe virtual channel is stalled due to lack of VNA and
	     VN0 credits. Note that this event does not filter out when a flit
	     would not have been selected for arbitration because another vir‐
	     tual channel is getting arbitrated.

     QPI_TX_STALLED_MULTI_FLIT.NCB.LINK_1
	     (Event 41H, Umask 10H) Counts cycles the Quickpath outbound link
	     1 Non-Coherent Bypass virtual channel is stalled due to lack of
	     VNA and VN0 credits. Note that this event does not filter out
	     when a flit would not have been selected for arbitration because
	     another virtual channel is getting arbitrated.

     QPI_TX_STALLED_MULTI_FLIT.NCS.LINK_1
	     (Event 41H, Umask 20H) Counts cycles the Quickpath outbound link
	     1 Non-Coherent Standard virtual channel is stalled due to lack of
	     VNA and VN0 credits. Note that this event does not filter out
	     when a flit would not have been selected for arbitration because
	     another virtual channel is getting arbitrated.

     QPI_TX_STALLED_MULTI_FLIT.LINK_0
	     (Event 41H, Umask 07H) Counts cycles the Quickpath outbound link
	     0 virtual channels are stalled due to lack of VNA and VN0 cred‐
	     its. Note that this event does not filter out when a flit would
	     not have been selected for arbitration because another virtual
	     channel is getting arbitrated.

     QPI_TX_STALLED_MULTI_FLIT.LINK_1
	     (Event 41H, Umask 38H) Counts cycles the Quickpath outbound link
	     1 virtual channels are stalled due to lack of VNA and VN0 cred‐
	     its. Note that this event does not filter out when a flit would
	     not have been selected for arbitration because another virtual
	     channel is getting arbitrated.

     QPI_TX_HEADER.FULL.LINK_0
	     (Event 42H, Umask 01H) Number of cycles that the header buffer in
	     the Quickpath Interface outbound link 0 is full.

     QPI_TX_HEADER.BUSY.LINK_0
	     (Event 42H, Umask 02H) Number of cycles that the header buffer in
	     the Quickpath Interface outbound link 0 is busy.

     QPI_TX_HEADER.FULL.LINK_1
	     (Event 42H, Umask 04H) Number of cycles that the header buffer in
	     the Quickpath Interface outbound link 1 is full.

     QPI_TX_HEADER.BUSY.LINK_1
	     (Event 42H, Umask 08H) Number of cycles that the header buffer in
	     the Quickpath Interface outbound link 1 is busy.

     QPI_RX_NO_PPT_CREDIT.STALLS.LINK_0
	     (Event 43H, Umask 01H) Number of cycles that snoop packets incom‐
	     ing to the Quickpath Interface link 0 are stalled and not sent to
	     the GQ because the GQ Peer Probe Tracker (PPT) does not have any
	     available entries.

     QPI_RX_NO_PPT_CREDIT.STALLS.LINK_1
	     (Event 43H, Umask 02H) Number of cycles that snoop packets incom‐
	     ing to the Quickpath Interface link 1 are stalled and not sent to
	     the GQ because the GQ Peer Probe Tracker (PPT) does not have any
	     available entries.

     DRAM_OPEN.CH0
	     (Event 60H, Umask 01H) Counts number of DRAM Channel 0 open com‐
	     mands issued either for read or write. To read or write data, the
	     referenced DRAM page must first be opened.

     DRAM_OPEN.CH1
	     (Event 60H, Umask 02H) Counts number of DRAM Channel 1 open com‐
	     mands issued either for read or write. To read or write data, the
	     referenced DRAM page must first be opened.

     DRAM_OPEN.CH2
	     (Event 60H, Umask 04H) Counts number of DRAM Channel 2 open com‐
	     mands issued either for read or write. To read or write data, the
	     referenced DRAM page must first be opened.

     DRAM_PAGE_CLOSE.CH0
	     (Event 61H, Umask 01H) DRAM channel 0 command issued to CLOSE a
	     page due to page idle timer expiration. Closing a page is done by
	     issuing a precharge.

     DRAM_PAGE_CLOSE.CH1
	     (Event 61H, Umask 02H) DRAM channel 1 command issued to CLOSE a
	     page due to page idle timer expiration. Closing a page is done by
	     issuing a precharge.

     DRAM_PAGE_CLOSE.CH2
	     (Event 61H, Umask 04H) DRAM channel 2 command issued to CLOSE a
	     page due to page idle timer expiration. Closing a page is done by
	     issuing a precharge.

     DRAM_PAGE_MISS.CH0
	     (Event 62H, Umask 01H) Counts the number of precharges (PRE) that
	     were issued to DRAM channel 0 because there was a page miss. A
	     page miss refers to a situation in which a page is currently open
	     and another page from the same bank needs to be opened. The new
	     page experiences a page miss. Closing of the old page is done by
	     issuing a precharge.

     DRAM_PAGE_MISS.CH1
	     (Event 62H, Umask 02H) Counts the number of precharges (PRE) that
	     were issued to DRAM channel 1 because there was a page miss. A
	     page miss refers to a situation in which a page is currently open
	     and another page from the same bank needs to be opened. The new
	     page experiences a page miss. Closing of the old page is done by
	     issuing a precharge.

     DRAM_PAGE_MISS.CH2
	     (Event 62H, Umask 04H) Counts the number of precharges (PRE) that
	     were issued to DRAM channel 2 because there was a page miss. A
	     page miss refers to a situation in which a page is currently open
	     and another page from the same bank needs to be opened. The new
	     page experiences a page miss. Closing of the old page is done by
	     issuing a precharge.

     DRAM_READ_CAS.CH0
	     (Event 63H, Umask 01H) Counts the number of times a read CAS com‐
	     mand was issued on DRAM channel 0.

     DRAM_READ_CAS.AUTOPRE_CH0
	     (Event 63H, Umask 02H) Counts the number of times a read CAS com‐
	     mand was issued on DRAM channel 0 where the command issued used
	     the auto-precharge (auto page close) mode.

     DRAM_READ_CAS.CH1
	     (Event 63H, Umask 04H) Counts the number of times a read CAS com‐
	     mand was issued on DRAM channel 1.

     DRAM_READ_CAS.AUTOPRE_CH1
	     (Event 63H, Umask 08H) Counts the number of times a read CAS com‐
	     mand was issued on DRAM channel 1 where the command issued used
	     the auto-precharge (auto page close) mode.

     DRAM_READ_CAS.CH2
	     (Event 63H, Umask 10H) Counts the number of times a read CAS com‐
	     mand was issued on DRAM channel 2.

     DRAM_READ_CAS.AUTOPRE_CH2
	     (Event 63H, Umask 20H) Counts the number of times a read CAS com‐
	     mand was issued on DRAM channel 2 where the command issued used
	     the auto-precharge (auto page close) mode.

     DRAM_WRITE_CAS.CH0
	     (Event 64H, Umask 01H) Counts the number of times a write CAS
	     command was issued on DRAM channel 0.

     DRAM_WRITE_CAS.AUTOPRE_CH0
	     (Event 64H, Umask 02H) Counts the number of times a write CAS
	     command was issued on DRAM channel 0 where the command issued
	     used the auto-precharge (auto page close) mode.

     DRAM_WRITE_CAS.CH1
	     (Event 64H, Umask 04H) Counts the number of times a write CAS
	     command was issued on DRAM channel 1.

     DRAM_WRITE_CAS.AUTOPRE_CH1
	     (Event 64H, Umask 08H) Counts the number of times a write CAS
	     command was issued on DRAM channel 1 where the command issued
	     used the auto-precharge (auto page close) mode.

     DRAM_WRITE_CAS.CH2
	     (Event 64H, Umask 10H) Counts the number of times a write CAS
	     command was issued on DRAM channel 2.

     DRAM_WRITE_CAS.AUTOPRE_CH2
	     (Event 64H, Umask 20H) Counts the number of times a write CAS
	     command was issued on DRAM channel 2 where the command issued
	     used the auto-precharge (auto page close) mode.

     DRAM_REFRESH.CH0
	     (Event 65H, Umask 01H) Counts number of DRAM channel 0 refresh
	     commands. DRAM loses data content over time. In order to keep
	     correct data content, the data values have to be refreshed peri‐
	     odically.

     DRAM_REFRESH.CH1
	     (Event 65H, Umask 02H) Counts number of DRAM channel 1 refresh
	     commands. DRAM loses data content over time. In order to keep
	     correct data content, the data values have to be refreshed peri‐
	     odically.

     DRAM_REFRESH.CH2
	     (Event 65H, Umask 04H) Counts number of DRAM channel 2 refresh
	     commands. DRAM loses data content over time. In order to keep
	     correct data content, the data values have to be refreshed peri‐
	     odically.

     DRAM_PRE_ALL.CH0
	     (Event 66H, Umask 01H) Counts number of DRAM Channel 0 precharge-
	     all (PREALL) commands that close all open pages in a rank. PREALL
	     is issued when the DRAM needs to be refreshed or needs to go into
	     a power down mode.

     DRAM_PRE_ALL.CH1
	     (Event 66H, Umask 02H) Counts number of DRAM Channel 1 precharge-
	     all (PREALL) commands that close all open pages in a rank. PREALL
	     is issued when the DRAM needs to be refreshed or needs to go into
	     a power down mode.

     DRAM_PRE_ALL.CH2
	     (Event 66H, Umask 04H) Counts number of DRAM Channel 2 precharge-
	     all (PREALL) commands that close all open pages in a rank. PREALL
	     is issued when the DRAM needs to be refreshed or needs to go into
	     a power down mode.

     DRAM_THERMAL_THROTTLED
	     (Event 67H, Umask 01H) Uncore cycles DRAM was throttled due to
	     its temperature being above the thermal throttling threshold.

     THERMAL_THROTTLING_TEMP.CORE_0
	     (Event 80H, Umask 01H) Cycles that the PCU records that core 0 is
	     above the thermal throttling threshold temperature.

     THERMAL_THROTTLING_TEMP.CORE_1
	     (Event 80H, Umask 02H) Cycles that the PCU records that core 1 is
	     above the thermal throttling threshold temperature.

     THERMAL_THROTTLING_TEMP.CORE_2
	     (Event 80H, Umask 04H) Cycles that the PCU records that core 2 is
	     above the thermal throttling threshold temperature.

     THERMAL_THROTTLING_TEMP.CORE_3
	     (Event 80H, Umask 08H) Cycles that the PCU records that core 3 is
	     above the thermal throttling threshold temperature.

     THERMAL_THROTTLED_TEMP.CORE_0
             (Event 81H, Umask 01H) Cycles that the PCU records that core 0
             is in the power throttled state due to the core's temperature
             being above the thermal throttling threshold.

     THERMAL_THROTTLED_TEMP.CORE_1
             (Event 81H, Umask 02H) Cycles that the PCU records that core 1
             is in the power throttled state due to the core's temperature
             being above the thermal throttling threshold.

     THERMAL_THROTTLED_TEMP.CORE_2
             (Event 81H, Umask 04H) Cycles that the PCU records that core 2
             is in the power throttled state due to the core's temperature
             being above the thermal throttling threshold.

     THERMAL_THROTTLED_TEMP.CORE_3
             (Event 81H, Umask 08H) Cycles that the PCU records that core 3
             is in the power throttled state due to the core's temperature
             being above the thermal throttling threshold.

     PROCHOT_ASSERTION
	     (Event 82H, Umask 01H) Number of system assertions of PROCHOT
	     indicating the entire processor has exceeded the thermal limit.

     THERMAL_THROTTLING_PROCHOT.CORE_0
             (Event 83H, Umask 01H) Cycles that the PCU records that core 0
             is in a low power state due to the system asserting PROCHOT,
             indicating the entire processor has exceeded the thermal limit.

     THERMAL_THROTTLING_PROCHOT.CORE_1
             (Event 83H, Umask 02H) Cycles that the PCU records that core 1
             is in a low power state due to the system asserting PROCHOT,
             indicating the entire processor has exceeded the thermal limit.

     THERMAL_THROTTLING_PROCHOT.CORE_2
             (Event 83H, Umask 04H) Cycles that the PCU records that core 2
             is in a low power state due to the system asserting PROCHOT,
             indicating the entire processor has exceeded the thermal limit.

     THERMAL_THROTTLING_PROCHOT.CORE_3
             (Event 83H, Umask 08H) Cycles that the PCU records that core 3
             is in a low power state due to the system asserting PROCHOT,
             indicating the entire processor has exceeded the thermal limit.

     TURBO_MODE.CORE_0
	     (Event 84H, Umask 01H) Uncore cycles that core 0 is operating in
	     turbo mode.

     TURBO_MODE.CORE_1
	     (Event 84H, Umask 02H) Uncore cycles that core 1 is operating in
	     turbo mode.

     TURBO_MODE.CORE_2
	     (Event 84H, Umask 04H) Uncore cycles that core 2 is operating in
	     turbo mode.

     TURBO_MODE.CORE_3
	     (Event 84H, Umask 08H) Uncore cycles that core 3 is operating in
	     turbo mode.

     CYCLES_UNHALTED_L3_FLL_ENABLE
	     (Event 85H, Umask 02H) Uncore cycles that at least one core is
	     unhalted and all L3 ways are enabled.

     CYCLES_UNHALTED_L3_FLL_DISABLE
	     (Event 86H, Umask 01H) Uncore cycles that at least one core is
	     unhalted and all L3 ways are disabled.

SEE ALSO
     pmc(3), pmc.atom(3), pmc.core(3), pmc.iaf(3), pmc.ucf(3), pmc.k7(3),
     pmc.k8(3), pmc.p4(3), pmc.p5(3), pmc.p6(3), pmc.corei7(3),
     pmc.corei7uc(3), pmc.westmere(3), pmc.tsc(3), pmc_cpuinfo(3), pmclog(3),
     hwpmc(4)

HISTORY
     The pmc library first appeared in FreeBSD 6.0.

AUTHORS
     The Performance Counters Library (libpmc, -lpmc) was written by
     Joseph Koshy ⟨jkoshy@FreeBSD.org⟩.

BSD				March 24, 2010				   BSD