The outputs of Microsoft Research are really good. At least in my field, it is one of the few places where, if they publish something, you can be sure of being able to reproduce the results using only what is described in the paper; no secret sauce required.
The issue with that evaluation was that machines were much better at avoiding trivial mistakes humans don't care about (e.g. transcribing "umm", "err", etc.), but were more likely to get the meaning wrong. Kudos to MS for doing the error analysis and publishing that info, but I found the reporting of it misleading.
I don't see that claim being made. Microsoft Research has generally been left alone to do good computer science, where "left alone" has included "left alone by the marketing department".
It is:
1) More accurate: compared to hyperbole like "Bridging the Gap between Human and Machine Translation", the title states the domain right there: news.
2) A more impressive result: it was obtained on an independently set up evaluation framework, whereas Google used a framework of their own.
Compare further the papers: https://arxiv.org/pdf/1609.08144.pdf https://www.microsoft.com/en-us/research/uploads/prod/2018/0...
These researchers appear to have been much clearer about what they're actually claiming, and also used more standard evaluation tools (Appraise) and methodology rather than something haphazardly hacked together.