Out of interest I tried running my Primes benchmark [1] on the x86_64 and x86 Alpine images and the riscv64 Buildroot image, all in Chrome on an M1 Mac Mini. All results are from the 2nd run, so that all needed code is already cached locally.
x86_64:
localhost:~# time gcc -O primes.c -o primes
real 0m 3.18s
user 0m 1.30s
sys 0m 1.47s
localhost:~# time ./primes
Starting run
3713160 primes found in 456995 ms
245 bytes of code in countPrimes()
real 7m 37.97s
user 7m 36.98s
sys 0m 0.00s
localhost:~# uname -a
Linux localhost 6.19.3 #17 PREEMPT_DYNAMIC Mon Mar 9 17:12:35 CET 2026 x86_64 Linux
x86 (i.e. 32 bit):
localhost:~# time gcc -O primes.c -o primes
real 0m 2.08s
user 0m 1.43s
sys 0m 0.64s
localhost:~# time ./primes
Starting run
3713160 primes found in 348424 ms
301 bytes of code in countPrimes()
real 5m 48.46s
user 5m 37.55s
sys 0m 10.86s
localhost:~# uname -a
Linux localhost 4.12.0-rc6-g48ec1f0-dirty #21 Fri Aug 4 21:02:28 CEST 2017 i586 Linux
riscv64:
[root@localhost ~]# time gcc -O primes.c -o primes
real 0m 2.08s
user 0m 1.13s
sys 0m 0.93s
[root@localhost ~]# time ./primes
Starting run
3713160 primes found in 180893 ms
216 bytes of code in countPrimes()
real 3m 0.90s
user 3m 0.89s
sys 0m 0.00s
[root@localhost ~]# uname -a
Linux localhost 4.15.0-00049-ga3b1e7a-dirty #11 Thu Nov 8 20:30:26 CET 2018 riscv64 GNU/Linux
Conclusion: as seen also in QEMU (also started by Bellard!), RISC-V is a *lot* easier to emulate than x86. If you're building code specifically to run in emulation, use RISC-V: builds faster, smaller code, runs faster.
Note: quite different gcc versions, with x86_64 being 15.2.0, x86 9.3.0, and riscv64 7.3.0.
Interesting to see the gcc version gap between the targets. The x86_64 image shipping gcc 15.2.0 vs 7.3.0 on riscv64 makes the performance comparison less apples-to-apples than it looks - newer gcc versions have significantly better optimization passes, especially for register allocation.
MIPS (the arch that RISC-V mostly copies) is even easier to emulate: unlike RV, it does not scatter immediate bits all over the instruction word, making it easier for an emulator to extract immediates. If you need emulated performance, MIPS is the easiest of all.
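To illustrate the point, here's a hedged sketch (not taken from any particular emulator) of what extracting a store-instruction immediate costs on each ISA: MIPS keeps its 16-bit immediate in one contiguous field, while the RISC-V S-type splits its 12-bit immediate across two fields that must be masked, shifted, merged, and sign-extended.

```c
#include <assert.h>
#include <stdint.h>

/* MIPS I-type (e.g. SW): the 16-bit immediate sits contiguously in
   bits 15:0, so extraction is a single sign-extending cast. */
static int32_t mips_imm(uint32_t inst) {
    return (int32_t)(int16_t)(inst & 0xffff);
}

/* RISC-V S-type (e.g. SW): imm[11:5] lives in bits 31:25 and imm[4:0]
   in bits 11:7, so an emulator must mask, shift, merge, and then
   sign-extend the reassembled 12-bit value. */
static int32_t rv_s_imm(uint32_t inst) {
    uint32_t imm = ((inst >> 25) << 5) | ((inst >> 7) & 0x1f);
    return (int32_t)(imm << 20) >> 20;   /* sign-extend from bit 11 */
}
```

In a hot decode loop the extra shifts and ORs are cheap but not free, and the B-type and J-type branch formats scramble their immediate bits even more.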
Making insider/true expert information public more quickly in the form of influencing prices in a toy market is THE ENTIRE POINT of prediction markets.
It might've been the original purpose but in practice prediction markets have turned into a tool for gambling.
It also creates weird incentives. If I want to pay a politician to do something, bribing them would generally be illegal. But what if I instead bet lots of money that they won't do it?
This may be the purpose of a prediction market for an outside observer. But the outside observer and the US government (or any org that holds private information) have different purposes - it's an adversarial mechanism.
In particular: the government is free to just publish any insider/true information that it wants the public to know about. If it shared that purpose then the market wouldn't need to exist.
True experts need not be the people with the ultimate ability to effect change. Professional sports organizations ban their players from betting on games because it creates bad incentives to throw a winnable game. Banning elected representatives from gambling on prediction markets doesn’t make it impossible for insider information to surface, but it does prevent the governance equivalent of match fixing.
> Making insider/true expert information public more quickly in the form of influencing prices in a toy market is THE ENTIRE POINT of prediction markets
American taxpayers pay a lot of money for a military and intelligence advantage. It's not clear it's in our interest for that knowledge to be made "public more quickly."
What we don’t want, and what we should enforce, is participants in prediction markets influencing the events they’re betting on (like the recent basketball betting scandal).
Same. Been rocking Sonoma on my M1 Mac for years at this point and it's been great. There have been almost zero upsides to upgrading macOS versions lately.
The concept wasn't all that slow at a time when RAM speeds matched CPU speeds. I think it's just that TI's implementation of the concept in that particular cost-optimised home computer was pretty bad: the actual registers were in 256 bytes of fast static RAM, but the rest of the system memory (both ROM and RAM) was accessed very inefficiently, not only 1 byte at a time on a 16-bit machine, but also with something like 4 wait states for every byte.
The 6502 is not very different with a very small number of registers and Zero Page being used for most of what a modern machine would use registers for. For example (unlike the Z80) there is no register-to-register add or subtract or compare -- you can only add/sub/cmp/and/or/xor a memory location to the accumulator. Also, pointers can only be done using a pair of adjacent Zero Page locations.
As long as you were using data in those in-RAM registers the TI-99/4 was around four times faster than a 1 MHz 6502 for 16 bit arithmetic -- and with a single 2-byte instruction doing what needed 7 instructions and 13 bytes of code on 6502 -- and it was also twice as fast on 8 bit arithmetic.
It was just the cheap-ass main memory (and I/O) implementation that crippled it.
> Instead of running large runtimes locally, it acts as a lightweight agent client and delegates reasoning to cloud LLM APIs (GLM/GPT/Claude), while keeping orchestration local.
I thought that's what OpenClaw already is -- it can use a local LLM if you have one, but doesn't have to. If it's intrinsically heavy, that's only because it's JavaScript running in node.js.
I tried the summit of Mt Ruapehu here in NZ and got 358.8 km to Mt Owen. Not bad as I was expecting Tapuae-o-Uenuku which is a little shorter at 342 km.
One advantage in NZ is that on a nice day you actually have a good chance of seeing it.
Oh ... clicking on Mt Owen doesn't return the favour ... or the other nearest peaks. But Culliford Hill does show a return back to Ruapehu, 355.4 km. Clicking on Tapuae-o-Uenuku also, as expected, gives a line to Ruapehu: 342.3km.
Mt Cook is high, but has too many other high peaks near it.
Mt Taranaki is isolated, but doesn't turn up any very long distances.
I don't expect any other candidates in NZ.
Update: actual and accidental photo of Tapuae-o-Uenuku from Ruapehu (342 km), seven months ago.
And, as pointed out in a comment, also Mount Alarm 2.5 km further.
What is the longest in North America? Or Europe proper -- not Elbrus (which I've not been to but have been close enough to see, from several places, e.g. from a house in Lermontov (~94 km only), the summit of Beshtau (93 km), the Dombai ski field (~63 km), and somewhere on the A157 (~50 km)).
Wow, glad you had fun exploring. It suddenly made me think of a little feature that I'm not sure we did the best job of exposing. In the little trophy icon toggle on the right, there's the Top Ten list of views, then under those there's a little line that just says "In current viewport: 123km". Did you see that? Did it make sense? I implemented it, so of course I know that it's better than clicking all the points around a peak to find the longest view from a mountain summit. But maybe it's not obvious to other users? What I do is zoom in so that the viewport only contains the area of the summit (or indeed entire country for that matter) that I'm interested in, then I look at that "In current viewport:" line without having to click anything.
That gives a longest in NZ of 365.3 km from Ruapehu, skirting past close by Tapuae-o-Uenuku (in the Inland Kaikoura Range) to a point on the Seaward Kaikoura Range near the peak of Manakau. Clicking on the actual Manakau peak also gives 365.3 km back to Ruapehu.
I can't seem to find a peak to get a reverse path back to Mt Rainier. Everything I try gets stuck in the Olympic Peninsula. (I was there once ... 1998 or so ... a place called Hurricane Ridge IIRC)
One thing to note about finding reverse lines, is that they're not truly mathematically identical because the observer always has a height of 1.65m and the destination is always some point at the surface, therefore 0.0m. It doesn't always make a difference, but it sometimes can.
The thing about observer height that I always try to remember is that features really close to the observer can make an outsized difference. Imagine how simply putting a hand in front of your eyes can suddenly make the whole world disappear. So in theory, a mere change of a few centimeters in the height of the observer could effect a similarly dramatic change in the view.
Not a geologist, but it's interesting that many of these sites are close to the equator. I suppose that's where mountains are higher, because tectonic plates are more active?
Not a geologist either but an astronomer. Never heard that tectonic activity has any association with proximity to equator.
Mountains can rise higher near the equator because you have the least gravity there. The whole Earth bulges along the equator. But I don't think the effect on mountain height is measurable.
While Everest (8849m) is the highest point above Sea Level, Chimborazo (6267m) in Ecuador is further from the centre of the Earth (about 2000 metres further), due to the equatorial bulge. It's very measurable.
Well, that's not what the claim and clarification were about. The question was: can a mountain rise higher at the equator compared to higher latitudes?
It is not about the highest point from the centre of the Earth. That is related to the equatorial bulge but irrelevant to the discussion.
It's also interesting because the radius of curvature is smaller, meaning the distance to the horizon is shorter north south, and a lot of these views are north south. So the increase in mountain height more than overcomes the other effect!
The Earth is an oblate spheroid to a good approximation. It's not that the two directions aren't each symmetric; rather, at the equator the north-south axis has a higher rate of curvature than anywhere else (while the east-west axis has a somewhat lower rate, because of the larger circumference due to the bulge).
So it's striking that the longest lines of sight are near the equator on a north-south axis (or, symmetrically, south-north): the high rate of curvature in that direction at those latitudes should give the shortest distance to the horizon on Earth, making those lines of sight all the more impressive!
I've been using a K3 for a few weeks now. It's quite pleasant, and if I use all 16 cores (8x X100 and 8x A100) then it builds a Linux kernel almost 3x faster than my one year old Milk-V Megrez and almost 5x faster than K1.
Even using just the "AI" A100 cores is faster than the Megrez!
It's also great that it's now faster than a recent high end x86 with a lot of cores running QEMU.
The X100 cores are derived from T-Head's 2019 OpenC910. The A100 cores are derived from SpacemiT's own X60 cores in their K1/M1 SoC.
Note that the all-cores K3 result is running a distccd on each cluster, which adds quite a bit of overhead compared to a simple `make` on local cores. All the same, it shaves 2.5 minutes off. In theory, doing an Amdahl calculation on the X100 and A100 times, it might be possible to get close to 11m50s with a more efficient means of using heterogeneous cores, but distcc was easy to do.
Or, you could just run independent things (e.g. different builds) on each set of 8 cores.
Or maybe there's a lower overhead way to use distcc, or something else that is set up to distribute work to more than one set of resources.
I've written a small (~40 instructions) statically linked pure asm program [1] that switches the process to the A100 cores [2] then EXECs the rest of the arguments.
So you can just type something like:
ai bash
or
ai gcc -O primes.c -o primes
or
ai make -j8
... and that command (and any children) run safely on the A100 cores instead of the X100 cores.
It would be great if the upstream Linux kernel got official, nicely-worked-out support for heterogeneous cores -- more and more, RISC-V is going to be like this, but Intel would also benefit, with e.g. some cores having AVX-512 and some not; I even recall one Arm (Samsung) SoC with big.LITTLE cores with different cache block sizes.
But in the meantime, this is workable and useful.
[1] so there is no possibility of the dynamic linker, C start code, or libc using the V extension and putting the process into a state dangerous to migrate to the different-VLEN cores.
[2] by getting the PID and writing it to `/proc/set_ai_thread`
[1] http://hoult.org/primes.txt