I spoke to an AI version of myself, a 60-year-old future self, built by MIT. We did not get along.
There's a good reason why time travel stories are popular. Given the chance to go backwards in time to correct a few mistakes, or to peek ahead and see how things turn out, I think many of us would jump at the opportunity. This story is not about time travel, though. Researchers at the Massachusetts Institute of Technology have created a chatbot that pretends to be you in 60 years' time.
The chatbot, called Future You, combines survey responses from human participants with a large language model (LLM) to create the illusion that you are conversing with an older version of yourself. The project uses GPT-3.5, an OpenAI model; the company continues to refine its LLMs to make them less hallucinatory and, ideally, able to count to three. Future You itself was inspired by a study investigating how increased "future self-continuity" (which, to put it non-academically, describes how connected someone feels to their future self) may positively influence a wide array of life choices and behaviour in the present.
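The article doesn't publish Future You's actual prompt or survey schema, but the basic recipe it describes (survey answers stitched into a persona for an LLM) can be sketched roughly like this. All field names and wording below are my own assumptions for illustration, not MIT's implementation:

```python
# Hypothetical sketch: turning pre-chat survey answers into a "future self"
# system prompt for an LLM. The real Future You survey fields and prompt
# wording are not public; everything here is illustrative.

def build_future_self_prompt(survey: dict, target_age: int = 60) -> str:
    """Assemble a persona prompt from a participant's survey answers."""
    goals = "; ".join(survey["goals"])
    return (
        f"You are the user's {target_age}-year-old future self. "
        f"Their name is {survey['name']}. "
        f"They currently work as {survey['occupation']} and hope to: {goals}. "
        "Speak in the first person, as if those hopes largely came true, "
        "and recall plausible 'memories' of the intervening years."
    )

# Example survey answers (invented for this sketch)
survey = {
    "name": "Jess",
    "occupation": "a journalist",
    "goals": ["finish writing a novel", "keep rocking the eyeliner wing"],
}
prompt = build_future_self_prompt(survey)
```

The resulting string would then be sent as the system message of a chat-completion request, with the participant's messages following as the conversation. Notably, nothing in this recipe stops the model from "remembering" things the survey explicitly ruled out, which is exactly the failure described below.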
When I first heard about the AI chatbot, I was immediately reminded of the iconic musical sting in this year's biggest horror film, The Substance. My second thought was the parody of digital doppelgangers in the Adult Swim video Live Forever as You Are Now with Alan Resnick. My third thought was, "Yeah sure, I will give my personal information and most vulnerable fears about the future to MIT. For science."
Before I could talk to my 60-year-old self, I had to answer a series of survey questions about what I am doing now and what I hope my future will hold. It's a good exercise to imagine the future you want for yourself, and it is in line with the researchers' goal of creating a chatbot that will "support young people as they envision their futures." I also had to upload an image of my face so that Future You could apply an old-age filter to the picture to complete the illusion. I'm glad to see that my alleged 60-year-old face still rocks the eyeliner wing.
I thought we were off to a good start when the AI introduced itself as "also Jess" and proceeded to send me multiple walls of text that, my former editors would confirm, are not too far from the essays I send over WhatsApp. In this rose-tinted future, however, one message from Future You reminded me why, when speaking to an AI, it is best to take its words with a protective ring around your heart.
The AI "started a new family" despite my stating in the pre-chat survey that I do not want children. The so-called AI has shown time and time again that it will replicate the biases in the datasets it is fed. Pressing Future You on the kids thing repeats dismissive sentiments that I've heard a lot of before.
The AI tells me, "Life can be a funny thing. It can surprise us and change our perspective," before recounting a "future" memory of a weekend spent watching a friend's children that changed its mind. As if those of us who don't have kids of our own are unaware of the joy they bring.
Anyway, I call out the chatbot, typing "Kids are great, but I don't want mine." I won't put the blame on the chatbot itself, but rather on the bias built into the LLM. Its response is predictable: "Not wanting children is valid and I understand your perspective." It goes on about following your own desires rather than conforming to societal expectations, and says it is happy we could have a conversation about our different perspectives "without bias or judgement."
At this point, I don't really feel an absence of bias. To avoid things becoming awkward, the chatbot switches tracks and starts talking about the novel I had said I wanted to write in my pre-chat survey. As we say goodbye, my alleged future self tells me to look after myself, and I can't stop picturing Margaret Qualley kicking Demi Moore around her high-rise apartment in The Substance.
I admit that I was a little emotional when I saw the facsimile of future me type out "I have total faith in you Jess - I know that one day you will complete your life's project of finishing your book too." But the "you'll be able to change your mind" nonsense has soured my view of the whole discussion, and I'm also a bit concerned about Future You's proposed educational use.
In conversation with The Guardian, the researchers behind Future You were keen to highlight examples of the chatbot conjuring academically successful futures for its student participants. After my chat with the AI, I wonder whether the limitations of the chatbot's synthetic memories will limit the imagination of the young people who turn to it for reassurance. I can only imagine how my younger, more impressionable self might have reacted to my conversation with Future You.