In 1964, the U.S. Government established the Automatic Language Processing Advisory Committee (ALPAC) to evaluate the progress of Computational Linguistics and the potential of Machine Translation (MT). After two years of research, the seven-scientist committee issued its famous report questioning the results achieved in the field of MT and calling for more research in Computational Linguistics. The Committee's recommendations led the U.S. Government to seriously reduce its funding.
This was a huge blow for an emerging science barely ten years old. Ever since, MT has had difficulty getting back on its feet. But MT advocates didn't lose hope, and the research continued.
With the remarkable increase in computer performance and the decrease in cost, MT started gaining great interest again in the 1980s, and the "statistical" approach to MT came to life. Commercial MT systems invaded the market in the early 90s.
Whatever technology or approach an MT system uses, the consumer's concern is always the quality of the translation. This quality is judged by how similar the result is to human translation. The statistical approach (implemented on this site using Google language tools) seems to come closer to fulfilling these expectations, as it tries to generate translations based on millions of human-translated sentences in bilingual text corpora. This is considered a "shortcut" towards acceptable translations, but hope still lies with the linguistic (rule-based) approach, where the machine must first understand the source text before trying to generate a translation! The perfect machine translation could be the result of a hybrid method relying on human translations but applying the linguistic rules of both the source and target languages.
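To make the core idea of the statistical approach concrete, here is a minimal toy sketch, not the actual Google system or any real MT engine: it simply counts how often each source phrase was rendered a particular way in a tiny, hand-made parallel corpus and then reuses the most frequent human translation. The corpus and function names are invented for illustration; real systems work at a far larger scale and combine many such statistics with a language model.

```python
# Toy illustration of the "statistical" idea: lean on observed human translations.
# (Hypothetical data and names; not how any production MT system is implemented.)
from collections import Counter, defaultdict

# Invented aligned phrase pairs standing in for a real bilingual corpus.
parallel_corpus = [
    ("bonjour", "hello"),
    ("bonjour", "good morning"),
    ("bonjour", "hello"),
    ("merci beaucoup", "thank you very much"),
    ("merci beaucoup", "thanks a lot"),
    ("merci beaucoup", "thank you very much"),
]

# Build a phrase table: source phrase -> counts of observed translations.
phrase_table = defaultdict(Counter)
for source, target in parallel_corpus:
    phrase_table[source][target] += 1

def translate(source_phrase):
    """Return the most frequently observed human translation, if any."""
    candidates = phrase_table.get(source_phrase)
    if not candidates:
        return None  # a real system would back off to smaller phrase units
    return candidates.most_common(1)[0][0]

print(translate("bonjour"))         # -> "hello"
print(translate("merci beaucoup"))  # -> "thank you very much"
```

The point of the sketch is only that the statistical approach recycles what human translators have already produced, whereas a rule-based system would instead try to analyze the source sentence before generating anything.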