Benchmark scores do reflect a processor’s performance, but “performance” is a broad concept. A benchmark ranks processors in one specific scenario, which tells you whether a chip is strong or weak in that area; in that sense, benchmarks do show performance.
However, benchmark results are rarely universally applicable. They cannot simply be generalized, and specialized or heavily optimized benchmarks in particular often have little relevance outside their own workload.
For instance, Cinebench R23 is a benchmark that tests CPU rendering performance.
Running natively, without translation, the M1 chip can achieve a score of 7833, comparable to a desktop i5-10400F. This tells us that in the Cinebench R23 rendering test, the M1 performs at the level of the i5-10400F.
But if you switch to a renderer the M1 doesn’t natively support, so that it has to run under translation, its performance drops significantly. In that case, you can’t say the M1 still offers i5-10400F-level rendering performance. The R23 result therefore only applies to R23 and cannot be generalized to other rendering software, where the outcome would differ substantially. This is a clear example of why benchmark results can’t be applied universally.
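To make the translation point concrete, here is a minimal sketch (assuming macOS and Apple’s documented sysctl.proc_translated key; the file name and helper are illustrative) that reports whether the current process is running natively or under Rosetta 2:

```c
// is_translated.c -- reports whether this process runs natively or under
// Rosetta 2 translation, using Apple's documented sysctl.proc_translated key.
#include <stdio.h>
#include <sys/sysctl.h>

// Returns 1 if translated by Rosetta 2, 0 if native, -1 if unknown
// (the key does not exist on Intel Macs or older macOS versions).
static int process_is_translated(void) {
    int ret = 0;
    size_t size = sizeof(ret);
    if (sysctlbyname("sysctl.proc_translated", &ret, &size, NULL, 0) == -1) {
        return -1;
    }
    return ret;
}

int main(void) {
    switch (process_is_translated()) {
        case 1:  puts("Running under Rosetta 2 translation"); break;
        case 0:  puts("Running natively");                    break;
        default: puts("Translation status unknown");          break;
    }
    return 0;
}
```

Compiling the same source once as a native arm64 binary and once with -arch x86_64, then running both on an Apple silicon Mac, shows the flag flip; that is exactly the situation in which numbers measured on the same machine stop being comparable.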
Moreover, there are benchmarks the M1 can’t run at all, and that doesn’t diminish the chip’s value either.
The M1 is among the first 5nm high-performance SoCs, with outstanding efficiency and impressive single-core performance. In native macOS environments, it runs some supported software more efficiently than older Intel processors. The M1 also has dedicated units for specific tasks, such as H.264/H.265 and ProRes encoding, which makes it excel in those areas. In other words, the M1 is extremely strong in specific scenarios.
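As a small illustration of those dedicated media blocks, the sketch below asks macOS’s VideoToolbox framework whether hardware decoding is available for H.264 and HEVC. It only covers the decode side (the encode capability mentioned above isn’t queried here) and is a minimal example, not a full capability probe:

```c
// check_hw_decode.c -- asks VideoToolbox whether hardware decoders are
// available for H.264 and HEVC on this machine (decode side only).
// Build: clang check_hw_decode.c -framework VideoToolbox -o check_hw_decode
#include <stdio.h>
#include <VideoToolbox/VideoToolbox.h>

int main(void) {
    // VTIsHardwareDecodeSupported reports whether the system exposes a
    // hardware decoder for the given codec type.
    Boolean h264 = VTIsHardwareDecodeSupported(kCMVideoCodecType_H264);
    Boolean hevc = VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC);
    printf("H.264 hardware decode: %s\n", h264 ? "yes" : "no");
    printf("H.265/HEVC hardware decode: %s\n", hevc ? "yes" : "no");
    return 0;
}
```

On Apple silicon both lines should report “yes”; on older Intel Macs the answer depends on the generation of the integrated media engine.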
Thus, a single test scenario only represents performance in that particular context. The objective results don’t change based on opinion, even though people’s comments may sometimes carry bias or be misread as biased.
For example, someone might point out the i5-10400F’s superior compatibility, or mention how many things the M1 can’t run, and that is a valid critique.
However, in many discussions, people often use the results from a single benchmark to represent the entire performance of a processor, leading to a back-and-forth where neither side agrees.
This isn’t just true for the Apple vs. x86 debate; it also applies to Intel vs. AMD in the x86 space.
For instance, in Cinebench R15, which is an SSE2-based benchmark, the Zen 1 architecture with SMT enabled outperforms the Skylake architecture with Hyper-Threading enabled at the same clock speed. Therefore, in Cinebench R15, Zen 1 is stronger.
However, in Cinebench R20 or R23, Zen 1 falls to the level of Haswell, while Skylake pulls ahead. Thus, in R20/R23, Skylake is clearly superior to Zen 1.
Both conclusions are valid in their respective contexts, but the overall results differ significantly.
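Part of this flip comes down to instruction sets: R15 is described above as SSE2-based, while later Cinebench versions lean on newer vector extensions. As a minimal sketch (GCC or Clang on an x86 machine; the file name is illustrative), here is how you can check which SIMD extensions a given CPU actually reports:

```c
// simd_probe.c -- prints which x86 SIMD extensions the running CPU reports,
// using GCC/Clang CPU-feature builtins (backed by CPUID).
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();  // initialize the feature cache used by the checks below
    printf("SSE2:    %s\n", __builtin_cpu_supports("sse2")    ? "yes" : "no");
    printf("AVX:     %s\n", __builtin_cpu_supports("avx")     ? "yes" : "no");
    printf("AVX2:    %s\n", __builtin_cpu_supports("avx2")    ? "yes" : "no");
    printf("AVX512F: %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
    return 0;
}
```

Both Zen 1 and Skylake report AVX2 support, but they execute those instructions differently (Zen 1 splits 256-bit operations into two 128-bit halves), which is one reason a benchmark’s choice of code paths can shift the ranking between versions.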
When discussing performance, it’s important not to generalize from a single benchmark because different factors can lead to varied outcomes depending on the specific scenario.
Real-world applications are also a valid measure of performance because they reflect how a product performs in a given context. As long as the results are objective and unaffected by interference, they hold significant value.
If you’re comparing real-world performance with benchmark scores, the deciding factor should be what type of performance you need.
For example, if a beginner asks, “Is processor A better than processor B at task XXX?”, then performance in that specific task carries more weight than benchmark scores, because that is what they actually need; whichever processor performs better in that scenario should be considered the stronger one for them.
Even if processor A is better at the task while B beats A by a wide margin in some benchmark, that doesn’t matter: high benchmark scores won’t help if they aren’t relevant to the user’s needs.
On the other hand, if someone asks about a scenario with no specific data, and you only have benchmarks and productivity tests to go on, then both are equally valuable, and neither can be said to hold more importance than the other.