In the spring of 2024, Rachael Sawyer, a technical writer from Texas, received a LinkedIn message from a recruiter hiring for a vaguely titled role of "writing analyst". She assumed the work would be similar to her previous content-creation gigs. On her first day a week later, however, that assumption was upended: instead of writing words herself, Sawyer's job was to rate and moderate content created by artificial intelligence.
The job initially involved a mix of reviewing meeting notes and chats summarized by Google's Gemini and, in some cases, watching short films the AI had made.
On occasion, she was asked to handle extreme content, flagging violent and sexually explicit material generated by Gemini for removal; this was mostly text. Over time, however, moderating such text and images went from an occasional duty to her exclusive task.
“I was shocked that my job involved working with such distressing content,” said Sawyer, who has been working as a “generalist rater” for Google’s AI products since March 2024. “Not only because I was given no warning and never asked to sign any consent forms during onboarding, but because neither the job title nor the description ever mentioned content moderation.”