Automated and AI-generated content in reviews must be classified as spam under Google's latest product ratings policies.


Highlights

  1. Google has updated its product rating policies, introducing guidelines for AI-generated content in reviews.
  2. Implementation of the new policy involves a combination of automated and human evaluation using machine learning algorithms and specially trained experts.
  3. Violating these policies may result in content disapproval, warnings, or account suspension.

Google Updates Product Ratings Policies On Automated AI Content


Google has updated its product ratings policies related to artificial intelligence (AI) and automated content, effective August 28, 2023.

My thanks to Duane Forrester, who discovered the policy update and shared it on Twitter.

Automated and AI-generated reviews

The new addition clarifies that reviews generated by a bot or AI application are not allowed and should be marked as spam.


Automated content: We don’t allow reviews that are primarily generated by a bot or AI application. If you identify such content, it should be marked as spam in your feed using the <is_spam> attribute.
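For illustration, a review flagged as automated might look something like the following in a product reviews XML feed. This is a minimal sketch: apart from <is_spam>, which the policy text above names directly, the surrounding element names, values, and structure are assumptions loosely modeled on Google's product review feeds schema, so consult the official feed specification for the exact required elements and their ordering.

  <review>
    <review_id>12345</review_id>
    <reviewer>
      <name>Example Reviewer</name>
    </reviewer>
    <review_timestamp>2023-08-28T10:00:00Z</review_timestamp>
    <content>Generic, bot-generated praise caught by the merchant's own checks.</content>
    <ratings>
      <overall min="1" max="5">5</overall>
    </ratings>
    <!-- The review stays in the feed but is flagged as spam because it was identified as AI-generated -->
    <is_spam>true</is_spam>
  </review>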

Implementing the updated policies

Google employs automated and manual evaluation techniques to ensure compliance.

Machine learning algorithms will support this effort while specially trained experts handle complex cases that require context.

Actions against violations may include disapproving the offending content or reviews, issuing warnings, or suspending accounts for repeated or serious offenses.

If any image is flagged for violating the policy, the associated review content will also be blocked.

Additional guidance for product ratings

Google's existing product ratings policies aim to maintain the authenticity, legality, and ethics of reviews on its platform.

The policies highlight Google’s stance against spam, urging users to mark irrelevant, repetitive, or meaningless text as spam using the <is_spam> attribute.

The policies also prohibit reviews of dangerous products or procedures, as well as regulated products that may be harmful or widely illegal.

To protect reviewers, the policy prohibits sharing personal and confidential information, such as phone numbers, email addresses, or URLs, in review content.

To ensure a clean and respectful review environment, Google’s policies do not permit the use of obscene, vulgar, or abusive language, violent or defamatory content, or personal attacks.

The rules clearly state that reviews arising from a conflict of interest, or containing biased feedback, are not to be submitted.

This includes reviews that were paid for, written by an employee, or written by people with a personal interest in the product.

Illegal content, including illegal links or links containing malware or viruses, is strictly prohibited.

In accordance with Google's security standards, the policy strictly prohibits sexually explicit content in reviews and promises immediate removal and, in cases involving minors, reporting to law enforcement and the National Center for Missing & Exploited Children.

Reviews that infringe copyrights or trademarks, or that involve plagiarism, also violate the guidelines.

Google’s policy strongly opposes hate speech, cross-promotion of unrelated products or websites, off-topic reviews, impersonation, and duplicate content.

Regarding language, reviews must be submitted in their original language, and Google provides translation options for users.

Maintaining originality

Google’s updated policies highlight the growing need for real, human-generated content that can be easily distinguished from automated, AI-generated content.

This emphasizes the role of humans in review and rating systems and limits the usefulness of AI-generated reviews, which can affect how some companies advertise their products.

This commitment helps ensure the reliability of the information and reviews in Google search results, which is essential for online businesses and consumer trust.
