Dear Sydney

Have you seen Google’s “Dear Sydney” Olympic ad featuring a father using Gemini AI to help his young daughter write a fan letter to Sydney McLaughlin-Levrone? It is one of the most disturbing commercials I’ve ever seen. To be clear, I love the idea of a young aspiring athlete inspired by an Olympic athlete. That’s awesome.

But after the exposition, the father decides that rather than coach his daughter to express herself honestly and help her find her own words, he types the following prompt into Gemini: “Help my daughter write a letter telling Sydney McLaughlin-Levrone how inspiring she is and be sure to mention that my daughter plans on breaking her world record… one day. (She says sorry, not sorry.)”

This is exactly what we do not want anyone to do with AI. Ever.

Parenting 101

A parent’s most important job is to educate their children. The father in the video is not encouraging his daughter to learn to express herself. Instead of guiding her to use her own words and communicate authentically, he is teaching her to rely on AI for this critical human skill.

The idea that the father is so insecure about his own language skills that he believes AI will do a better job (“I’m pretty good with words, but this has to be just right”) makes me sad. Google should be ashamed of this messaging. And just to fan the flames: why did Google reinforce the stereotype of a minority parent being undereducated and insecure about their communication skills? Everything about the premise of this commercial makes my blood boil.

Our Reality Is Shaped By Our Language

Our reality is profoundly shaped by our use of language. At its best, language is a form of data compression that attempts to distill complex experiences. This lossy compression inevitably leaves out nuances, leading to different interpretations of the same experiences.

For instance, you or I might describe a color as “red.” But that same color might be labeled “scarlet” by an artist or “dark rose” by an interior designer. Each term, while correct within its context, carries different connotations and associations, shaping the user’s perception and understanding. Thus, language not only conveys information but also frames our reality, highlighting the subjective nature of perception and interpretation. Google would have us believe that this young girl doesn’t need to learn to articulate and describe her reality. This is criminally negligent.

Misleading Promises of AI’s Capabilities

The commercial suggests that a poorly worded prompt, processed by a pattern-matching autocomplete algorithm, can empower an LLM to articulate a person’s feelings better than the person themselves. This portrayal is misleading, as it overestimates AI’s ability to understand and convey the nuances of human emotions and thoughts. While the use of generative AI for this task may appear valuable at first glance, the results lack the emotional depth that comes from personal expression. Give me a heartfelt message over a grammatically correct, AI-generated message any day.

I received just such a heartfelt message from a reader years ago. It was a single-line email about a blog post I had just written: “Shelly, you’re to [sic] stupid to own a smart phone.” I love this painfully ironic email so much that I have it framed on the wall in my office. It was honest, direct (and probably accurate).

AI’s Threat to Cultural and Linguistic Diversity

If this approach to communication becomes widespread (and Google is saying it will work hard to make it so), it will lead to a future dominated by homogenized modes of expression – a monocultural future where we see fewer and fewer examples of original human thought. As more and more people rely on AI to generate their content, it is easy to imagine a future where the richness of human language and culture erodes.

Echoes of “Wall-E”

It’s probably cliché to say the thesis of “Dear Sydney” echoes the dystopian themes of the movie “Wall-E,” where technology leads to a decline in human engagement and self-sufficiency. What really got me going is Google’s apparent belief that technology’s control over essential aspects of our humanity is inevitable. It is not!

Ultimately, We Have a Choice

What are the ethical implications of using AI to simulate human emotions and relationships? Does this fan letter accurately represent the sender’s intentions, or is it simply perceived by the sender as a more appropriate (but far less representative) form factor? This is a television commercial, so we have no idea what a young girl in a real-world situation would actually write – but this prequel to a dystopian future raises some broad ethical questions.

Suffice it to say, I hate everything about the underlying idea that democratized, homogenized skills are the highest and best use of AI. I don’t want to live in Generica, where every human experience continually devolves into cookie-cutter templates that “work.”

We have a choice. We can be mindful of the dangers of democratized mediocrity and reject it in favor of education, subject matter expertise, and skills amplification.

I flatly reject the future that Google is advertising. I want to live in a culturally diverse world where billions of individuals use AI to amplify their human skills, not in a world where we are used by AI pretending to be human.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.
