Like spoken language, written language comes in many different shapes and expressions. Through Artificial Intelligence (AI) tools like Google Lens, we are able to navigate across different written languages. However, there is always an inherent bias within this technology due to its initial human input. So, can the translate app detect the nuances of written language? What does it mean when I can translate some things but not others? Does this change and limit the ways we are able to express ourselves through written language? And finally, as AI and Augmented Reality (AR) technology becomes more commonplace in our daily lives, how will these biases affect the decisions we make and the experiences we have?
To explore the extremes, consider tagging (graffiti), a form of written expression that typically uses stylized lettering. Often associated with gangsters, anarchy, and vandalism, and in some places considered a punishable criminal activity by law, it is also regarded as self-expression and art. What happens when Google Lens is used to translate tags in Barcelona? Are these words meant to be readable or carry any meaning? Or do they exist simply to be seen? Visibility before readability. I can assume I won't get direct translations of words from this experiment, but what will I get out of it?
After playing around with this tool for some time, I realized that the lens misreads other architectural features of the public space as numbers and letters.
It translated windows on a building, patterns on the ground, and graphic elements, most commonly as the number "1", the letters "l" or "i", and "o"s or "0"s.
I also experimented with filming myself through the app as I wrote (badly, with my non-dominant hand), and with using my hand to interrupt the AR projections and move them around on the screen.