Google has come up with a set of measures to curb barrages of targeted negative reviews on its Maps app. The company's aim is to prevent bad-faith users from weaponizing the app's reviews to "cancel" products, services or businesses registered on the platform.
Google Maps ratings let a person show satisfaction with an establishment, and they can be decisive in helping others decide whether a place is worth visiting. For this, Google relies on contributions from its users: no one is better placed than they are to judge whether a restaurant, bar or store deserves a good rating. According to the company, millions of reviews are published every day.
But the tech giant also understands that the system can be misused with the sole objective of harming someone else's business. Against this backdrop, Google is updating its Maps review policies to ensure they "are based on real-world experiences."
"As the world evolves, so do our policies and protections. This helps us protect places and businesses from violating and off-topic content when there is a potential for abuse. For example, when governments and businesses began requiring proof of a COVID-19 vaccine before entering certain places, we implemented extra protections to remove Google reviews that criticize a business for its health and safety policies or for complying with a vaccine mandate."
Ian Leader, Product Manager in the Google Maps User Generated Content group
Training for humans and machines
Google explains on its official blog that every review submitted to Maps passes through the platform's moderation system, a hybrid composed of humans and machines.
Artificial intelligence forms the company's front line, since machines are good at identifying patterns. Thanks to this, Google claims that "the vast majority of fake and fraudulent content is removed before anyone actually sees it."
The software analyzes each review to identify whether the content is offensive or strays off topic, whether the account responsible is trustworthy (or has a suspicious history), and whether there is any anomaly around the establishment itself (such as a flood of reviews in a short time, or media exposure that could motivate fraudulent comments).
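Google does not disclose how these checks are implemented, but the three signals described above can be illustrated with a minimal sketch. Everything here is assumed for illustration: the thresholds, the trust score, the keyword set (a stand-in for a trained classifier) and the function names are hypothetical, not Google's actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

# Hypothetical thresholds; Google's real signals are not public.
MAX_REVIEWS_PER_HOUR = 20                # flag sudden floods of reviews
MIN_ACCOUNT_TRUST = 0.5                  # accounts below this look suspicious
OFFENSIVE_TERMS = {"scam", "fraudster"}  # stand-in for a real text classifier

@dataclass
class Review:
    author_trust: float       # 0.0 (untrusted) .. 1.0 (trusted), hypothetical score
    text: str
    submitted_at: datetime

def flag_review(review: Review, recent_times: List[datetime]) -> List[str]:
    """Return the moderation signals a review trips, if any.

    `recent_times` holds submission timestamps of other recent reviews
    for the same establishment.
    """
    flags = []
    # 1. Content check: offensive or off-topic wording.
    if set(review.text.lower().split()) & OFFENSIVE_TERMS:
        flags.append("offensive_content")
    # 2. Account check: suspicious or low-trust history.
    if review.author_trust < MIN_ACCOUNT_TRUST:
        flags.append("untrusted_account")
    # 3. Anomaly check: flood of reviews on one place within an hour.
    window_start = review.submitted_at - timedelta(hours=1)
    if sum(1 for t in recent_times if t >= window_start) >= MAX_REVIEWS_PER_HOUR:
        flags.append("review_flood")
    return flags
```

In a pipeline like the one the article describes, a review that returns no flags would be published immediately, while a flagged one would be held back or routed to human moderators for the quality checks discussed next.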
Human operators are responsible for running quality tests and training the algorithms to remove potentially biased models. This is an even more delicate job: it demands a painstaking process that weighs the many contexts in which a word can be used, to avoid undue blocks or blocks that propagate some kind of unwanted bias, and it is constantly evolving.

Google further explains that, even after a review is published, its systems keep analyzing the content to identify questionable patterns.
A matter of critical sense
As we have already covered in another Tecnoblog article, this whole machine learning process is extremely complicated from a technical standpoint: if having an accurate critical sense is hard for humans, imagine trying to quantify something so subjective and transfer that ability to machines.
At the same time, giving up this software-based filtering is impractical given the number of people who use services such as Google Maps. So, despite the challenge, it is good to see Google actively investing in updates and improvements.
If society is constantly changing, it is important that the rules taught to the algorithms keep pace with this evolution, and that they grasp the phenomena of each era (such as cancel culture, in this case), in order to deliver the best user experience.