That's an interesting question. Why would you anticipate a possible difference? Is the cluster running a different scheduler? If anything I would expect cloud-hosted managed clusters to have fewer free resources, since they also run pieces of the cloud provider's tooling on the nodes.
There are bottlenecks in the Kubernetes architecture that start popping up once you cross roughly 250-300 Pods per node (the published limit in the K8S docs is actually 110, which is also the kubelet's default maxPods).
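If you want to see how close your nodes are to that limit, here's a minimal sketch using the official Python kubernetes client. It just compares each node's advertised pod allocatable (which reflects the kubelet's maxPods setting) against the number of pods currently bound to it. The function name and the kubeconfig assumption are mine, not anything specific to the setup discussed here.

```python
# Sketch using the official Python kubernetes client (pip install kubernetes).
# Assumes a working kubeconfig; the function name is illustrative only.
from collections import Counter

from kubernetes import client, config


def pods_per_node():
    """Print each node's pod allocatable vs. how many pods are bound to it."""
    config.load_kube_config()  # or config.load_incluster_config() inside a Pod
    v1 = client.CoreV1Api()

    # Count pods by the node they are scheduled onto.
    counts = Counter()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.spec.node_name:
            counts[pod.spec.node_name] += 1

    # Compare against the kubelet's advertised pod capacity (maxPods, default 110).
    for node in v1.list_node().items:
        allocatable = node.status.allocatable.get("pods", "?")
        print(f"{node.metadata.name}: {counts[node.metadata.name]} pods "
              f"(allocatable: {allocatable})")


if __name__ == "__main__":
    pods_per_node()
```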
This is typically more of an issue when running the cluster directly on large bare-metal nodes, because in the cloud or on virtualization the same host gets split into a larger number of smaller nodes from a K8S perspective.
Obviously this also depends on the characteristics of the workload: if you run a lot of monoliths that have only barely been containerized, the Pods-per-Node limit may not be a concern at all, even on bare metal.
I don't recall any problems with pod placement, mostly because the machines are big enough to have ample room left over. We did move our ELK stack to a separate VM-backed k8s to be able to scale it better, but that was mostly because the storage box of the bare-metal cluster didn't have enough space to store more than a week's worth of logs.