
It is rare to build general-purpose computing hardware in lockstep with software. The two worlds are bridged only by the ISA interface.

It'd be interesting to see what interface we'd choose today with multiple decades of hardware and software development.



Considering we're still trapped by the Gentle Tyranny of Call/Return, we'd probably choose the same interfaces...never mind the huge path-dependence on optimising all the parts around those interfaces.

I have a sneaking suspicion that a more dataflow-oriented interface might be useful. The CPU does dataflow analysis, the compiler does dataflow analysis, and higher levels are often dataflow as well, yet they all communicate through a non-dataflow instruction stream. On the other hand, the only commercial dataflow CPU I am aware of, the NEC 7281 (http://www.merlintec.com/download/nec7281.pdf), was less than successful.
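To make the contrast concrete, here is a toy sketch (invented for illustration, not modeled on the NEC 7281 or any real ISA) of the same computation expressed as a sequential instruction stream versus a dataflow graph. In the stream, ordering is explicit and dependencies are implicit; in the graph, only dependencies are encoded, so independent nodes are free to fire concurrently:

```python
import operator

OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

# 1. Sequential instruction stream for t3 = (a + b) * (a - b):
#    the encoding fixes an order, and the hardware must rediscover
#    (via dataflow analysis) that "add" and "sub" are independent.
program = [
    ("add", "t1", "a", "b"),    # t1 = a + b
    ("sub", "t2", "a", "b"),    # t2 = a - b
    ("mul", "t3", "t1", "t2"),  # t3 = t1 * t2
]

def run_sequential(prog, env):
    for op, dst, x, y in prog:
        env[dst] = OPS[op](env[x], env[y])
    return env

# 2. Dataflow graph for the same computation: no order is encoded,
#    each node fires as soon as its operands have arrived.
graph = {
    "t1": ("add", "a", "b"),
    "t2": ("sub", "a", "b"),
    "t3": ("mul", "t1", "t2"),
}

def run_dataflow(graph, env):
    done = dict(env)
    pending = dict(graph)
    while pending:
        # every node in "ready" could fire in parallel on real hardware
        ready = [n for n, (_, x, y) in pending.items()
                 if x in done and y in done]
        for n in ready:
            op, x, y = pending.pop(n)
            done[n] = OPS[op](done[x], done[y])
    return done

print(run_sequential(program, {"a": 5, "b": 3})["t3"])  # 16
print(run_dataflow(graph, {"a": 5, "b": 3})["t3"])      # 16
```

Both interpreters compute the same result; the point is only that the sequential encoding throws away parallelism information that the compiler had and the CPU must then reconstruct.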


> It'd be interesting to see what interface we'd choose today with multiple decades of hardware and software development

A worse one? Only a couple of days ago there were stories about Itanium here on HN (a designed-from-scratch ISA that turned out to be practically worse than the much more ad-hoc x86_64). Even e.g. RISC-V (which is not entirely free of legacy) is practically 1:1 with ancient ISAs like ARM.


ARM is an evolving ISA. AArch64 is quite different from ARMv7, and has nicer security features than the other leading brand, which can barely manage to ship SGX.


It's not like x86 or RISC-V are not "evolving ISAs" either. And personally I would classify SGX and other "security" features as misfeatures, but that's for another day...



