It's common to report "CPU utilization" as an average across all logical CPU threads (as displayed in Windows Task Manager), but this figure is of limited use unless the observer is running applications that scale across many threads. Most applications DON'T scale across many threads, so you can end up with an uncomfortable situation where a 16-thread CPU is delivering the maximum possible performance to an application, yet "CPU utilization" reads only "6%".

Compounding this poor communication, Windows rapidly migrates software threads between hardware threads. This prevents the observer from pinning one logical thread for monitoring, confuses monitoring application algorithms, and obscures both the true single-thread CPU load and which application is responsible for it.

As new CPUs ship with more cores and threads, the problem is amplified, and the common averaged "CPU utilization" becomes useless: many tasks are barely visible on utilization graphs despite clocking the CPU up to maximum boost and fully saturating one thread or core.

I propose a novel "max CPU single-thread utilization" (MCSTU) metric — the utilization of the single busiest logical thread — and an accompanying "max CPU single-thread utilization application" metric for monitoring applications, so people can see when their CPU is being used, and by what application. These are complementary to the "average" CPU and GPU utilization values commonly shown in monitoring applications ever since the days of single-core CPUs, where the average and the max were the same thing. Note that the same phenomenon of average equaling max appears on many-core GPUs, where workloads are maximally threaded by design. Thank you for reading!
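To make the proposal concrete, here is a minimal Python sketch of how a monitor might derive MCSTU from one sample of per-logical-thread utilization percentages. The `mcstu` function name and the sample values are my own illustration, not part of any existing monitoring API; a real monitor would feed in live per-CPU readings (e.g. from performance counters) each tick.

```python
def mcstu(per_thread_util):
    """Return (max_utilization, thread_index) for one sample of
    per-logical-thread utilization percentages.

    `per_thread_util` is a hypothetical list of percentages, one
    entry per logical CPU thread, as a monitor might collect per tick.
    """
    if not per_thread_util:
        raise ValueError("need at least one logical thread sample")
    # Index of the busiest logical thread in this sample.
    idx = max(range(len(per_thread_util)), key=per_thread_util.__getitem__)
    return per_thread_util[idx], idx

# Illustrative sample: a 16-thread CPU where one thread is pinned near 100%
# by a single-threaded workload, while the rest are nearly idle.
sample = [2.0, 1.5, 98.0, 0.0, 3.1, 0.5, 1.0, 0.0,
          2.2, 0.0, 0.7, 1.1, 0.0, 0.3, 0.9, 0.4]
peak, thread = mcstu(sample)
avg = sum(sample) / len(sample)
print(f"average: {avg:.1f}%  MCSTU: {peak:.1f}% (thread {thread})")
# The averaged value (~7%) hides the fully loaded thread that MCSTU exposes.
```

The contrast in the printed line is exactly the communication gap described above: the averaged figure suggests an idle machine, while MCSTU shows a thread running flat out. Because the OS migrates the hot software thread between hardware threads, a real implementation would take the max per sampling interval rather than tracking any fixed thread index.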