That's interesting.
If you were using all 32 threads (8 times the Pi's core count), some basic math gives 30 min * 8 = 4 hrs, indicating that per thread the Pi is about 75% of the speed of a core on the x86. Not bad for the $.
Clearly that's very approximate, and depends greatly on caching and on how effective the hyperthreading is vs actual cores. But still, not a bad effort, and it shows that, irrespective of the instruction set, silicon is silicon and per unit area probably delivers approximately the same performance.
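For what it's worth, the back-of-envelope sums above can be sketched like this (assuming the x86 box took 30 min on all 32 threads and the Pi is a 4-core part with no SMT, hence "8 times the Pi"; the Pi's actual runtime comes from the quoted post, not shown here):

```python
# Back-of-envelope check of the thread-scaling arithmetic.
x86_minutes = 30                     # assumed: x86 wall time on all 32 threads
x86_threads = 32
pi_threads = 4                       # assumed: 4-core Pi, no SMT

scale = x86_threads // pi_threads    # 8x the Pi's core count
equiv_minutes = x86_minutes * scale  # 240 min = 4 hrs at the Pi's core count

# If the Pi per-thread speed is ~75% of an x86 thread, the Pi's run
# would take roughly equiv_minutes / 0.75:
pi_estimate_minutes = equiv_minutes / 0.75  # about 320 min, i.e. ~5h20m

print(equiv_minutes, round(pi_estimate_minutes))
```

As the post says, this ignores caching effects and the fact that a hyperthread is not a full core, so treat it as a rough sanity check rather than a benchmark.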
Statistics: Posted by jamesh — Sun Jul 13, 2025 9:00 am