Hmm: NNUE was introduced in 2018, the AlphaZero preprint came out in 2017, and AlphaGo made headlines in 2015-2016. I checked because my memory said it was AlphaGo's success that sparked the new level of interest in neural-network evaluation.
Given that timeline, it wouldn't surprise me if AlphaZero had no influence on NNUE, but it would surprise me if AlphaGo didn't.
The original NNUE paper cites AlphaZero[0]. The architectures are different because NNUE is optimized for CPUs: it uses integer quantization and a much smaller network. Still, I don't think one could credibly claim it would have come about without Google making so much noise about their neural network efforts in Go, Chess, and Shogi.
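To give a rough feel for what "integer quantization" means here, this is a toy sketch of symmetric int8 weight quantization. It is illustrative only, not NNUE's actual scheme (NNUE uses its own fixed scaling and layer layouts); the function names and the example weights are made up:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    # Symmetric quantization: scale so the largest magnitude maps to 127.
    # (Hypothetical helper for illustration, not from the NNUE codebase.)
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float weights from the int8 representation.
    return q.astype(np.float32) * scale

# Made-up example weights.
w = np.array([0.5, -1.27, 0.01, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_back = dequantize(q, s)
```

The point of the exercise is that inference can then run on 8-bit integer arithmetic (SIMD-friendly on CPUs) at the cost of a small, bounded rounding error per weight.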
For whatever it's worth, the NNUE training dataset contains positions from Leela games and several generations of self-play. Stockfish wouldn't be where it is if not for Google's impact. AlphaFold will likely have a similar impact on our understanding of protein structure. I don't know why everyone is so offended by them puffing their chests out a little bit here, the paper's linked in the article.
The second-strongest engine, Leela Chess Zero, is directly inspired by AlphaZero, though, and it did surpass Stockfish until NNUE was introduced.