This was going to be an addendum to the above comment, but I took too long trying to hunt down old marketing material for Virtual PC. Anyway, here is the intended addendum.
[edit: Giant addendum starts here, in response to comments in replies, rather than saying variations on the same things over and over again. Nothing preceding this comment has been changed or edited from my original post. As I've said elsewhere, I really wish HN provided edit history on comments]
First off, for people saying I'm talking about containers: I was not considering them at all; I consider them essentially tangential to the topic of virtual machines and virtualization. I haven't looked into modern container infrastructure enough to understand the full nuances. My exceptionally vague understanding of the current state of the art is that the original full-VM-per-container model Docker et al. introduced has been improved somewhat to allow better resource sharing between clients and the host hardware than a full VM provides, but still uses some degree of virtualization to provide better security boundaries than basic chroot containers ever could. Based on the comments in this thread I'm curious exactly how much of a kernel modern container VMs have, and if I ever find time between my job and long-winded HN comments I'll try to look into it - I'd love good references on exactly how kernel-level operation is split and shared in modern container implementations.
Anyway, as commonly used, virtualization means "code in the VM runs directly on the host CPU", and has for more than two decades now. An academic definition of a virtual machine may include emulation, but if you see any company or person talking about supporting virtual machines, virtual hosts, or virtualized X in any medium - press, marketing, articles (tech or non-tech) - you know they are not talking about emulation. The reason is simply that the technical characteristics of an emulated machine are so drastically different that any use of emulation has to be explicitly called out. Hence, absent any qualifier, virtualization means the client code executes directly on the host hardware. The introduction of hypervisor support in the CPU simply meant that more things could be done directly on the host hardware without requiring expensive/slow support from the VM runtime; it did not change the semantics of "virtual machine" vs. emulation, even at the time CPUs with direct support for virtualization entered the general market.
Back when VMware first started out, a big part of their marketing and performance messaging boiled down to "virtual machine != emulation", and that push was pretty much the death knell for any definition of "virtualization" and "virtual machine" that included emulation. As that model took off, "hypervisor" was introduced to the general industry as the term for the CPU mechanism that supports virtualization more efficiently (I'm sure it existed earlier in specialized industries and academia) by allowing _more_ code to run directly, but for the most part there was no change to userspace code in the client machine. Most of the early "hypervisor"/virtualization extensions (I believe on ARM they're explicitly called the "virtualization extensions", because virtualization does not mean emulation) were just making it easier for VM runtimes to avoid having to do anything to code running in kernel mode, so that that code could be left to run directly on the host CPU as well.
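As a concrete aside, those CPU extensions are things you can actually check for. A minimal sketch (my addition, not from any particular product): on Linux, x86 CPUs advertise Intel VT-x as the "vmx" CPUID flag and AMD-V as "svm" in /proc/cpuinfo, and it's exactly these features that let a hypervisor leave guest kernel code running directly on the host CPU rather than emulating it.

```python
# Hypothetical sketch: detect the x86 hardware virtualization extensions
# ("vmx" = Intel VT-x, "svm" = AMD-V) from a /proc/cpuinfo flags string.

def has_hw_virtualization(cpuinfo_flags: str) -> bool:
    """Return True if the flag string advertises VT-x (vmx) or AMD-V (svm)."""
    flags = set(cpuinfo_flags.split())
    return bool(flags & {"vmx", "svm"})

# Typical usage on Linux (reading the real /proc interface):
#   with open("/proc/cpuinfo") as f:
#       line = next(l for l in f if l.startswith("flags"))
#       print(has_hw_virtualization(line.partition(":")[2]))
```

The point of the sketch is just that "virtualization support" is a discrete, advertised CPU feature, distinct from emulation, which needs no hardware support at all.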
The closest emulation ever got to "virtualization" in non-academic terminology that I recall is arguably "Virtual PC for Mac" (for the young folk: Virtual PC was an x86 emulator for PPC Macs that was eventually bought by MS, IIRC), which had "virtual PC" in the product name. It did not, however, use the term virtualization, and was only ever explicitly described as emulation in the tech press; I certainly have no recollection of it ever being described as a virtual machine, even during its window of relevance. I'd love to find actual marketing material from the era because I'm genuinely curious what it actually said, but the product name seems to have been reused over time, so Google search results are fairly terrible and my attempts in the Wayback Machine have been similarly scattershot :-/
But if we look at the context: once Apple moved to x86, from day 1 the Parallels marketing that targeted the rapidly-becoming-irrelevant "Virtual PC for Mac" product talked about using virtualization rather than emulation to get better performance than Virtual PC. The rapid decline in the relevance of PPC meant that talking about not being emulation soon ceased to matter, because the meaning of a virtual machine in common language was a native client running code directly on the host CPU.
So while an academic argument may have existed in the past that virtualization included emulation, the reality is that the meaning of virtualization in any non-academic context since basically the late 90s has been that client code runs directly on the host CPU, not via emulation. Given that well-established meaning, my statement that virtualization of a non-host-architecture OS is definitionally not possible is a reasonable statement, and correct in the context of the modern use of the word virtualization (again, we're talking a couple of decades here, not some change in the last few months).
If you really want to argue with this, ask yourself how you would respond if you had leased a hundred virtualized x86 systems and then found half of them running at 10% the speed of the rest because they were actually emulated hardware. Then ask whether a lawyer for that company could successfully argue that "virtualization includes emulation" would pass muster as a definition, when you could bring in reps from every other provider and every commercial VM product, none of which involve emulation, plus every article published over decades about how [cloud or otherwise] VMs work (none of which mention emulation). If you really think your response would be "ah, you got me", or that that argument would work in court, then fair play to you: you're OK with your definition and we'll have to agree to disagree. But I think the vast majority of people in tech would disagree.