I fed Google's notebook summarisation feature an article about the dangers of AI scraping. It's just as creepy as you'd think.

Yes, it's a bit hypocritical of me to criticize AI, specifically generative AI, and then use it for a party trick. But as a journalist, this is akin to running an experiment. Google's NotebookLM has a new feature that lets you guide its audio summaries, steering them toward particular topics and sources. It's both clever and kind of haunting.

The addition was announced today on Google Labs' site for experimental AI tools, and users can try it out now. I wanted to feed it a piece that was nuanced but that I also knew well, and it's hard to think of a better candidate than something you wrote yourself.

I gave it a piece I had written earlier in the day, critical of opt-out policies when it comes to AI data scraping, and watched two AI hosts summarize it as if taking notes for me. They did a good job, apart from pronouncing opt-out as "Opt O U T".

The two AI hosts manage to arrive at my basic opinion in a roundabout way, and they appear to sincerely and rationally criticize the very thing that brought them into existence: data scraping. The conversation goes on to argue that users should be proactive about how their data is used and that there is still hope in the AI data battle.

For the second take, I asked the AI hosts to focus more on Elon Musk and his controversies, to see how far they would stray from my original article.

The hosts not only stumble a little over Musk's name but also trip up in their speech patterns, for example saying X and then calling it Twitter. It's surprising how often they use "ums" and "ahs".

We noticed many of the same quirks when we tested the podcast feature earlier this month, but NotebookLM is a step up because you can ask follow-up questions about the article. I asked for my article's basic arguments and got a concise four-point response covering the main criticisms of data scraping and, specifically, the problems with opt-out policies.

I noticed some repetition when I generated the summary a second and third time. For example, the male host called AI companies "sneaky" both times, and on the second try the female host delivered a similar "the future is being shaped now" line.

The confidence with which they speak is what concerns me. There's a feedback loop here: at a moment's notice, you can have a professional-sounding host telling you "the truth" based on a source you've supplied. My argument is fairly straightforward, whether you agree with it or not.

The summary is mostly accurate, though it gets a few small details wrong. For example, it says that company owners must opt out when it is actually users. But those small slips point toward a deeper, more philosophical question.

Why would writers keep writing when their work can so easily be condensed by two friendly voices that present the information however the listener wants it? Truly understanding what we read demands far more skill than absorbing an AI summary, and language is complex enough that we shouldn't trust a model to do that job for us.

For now this is a party trick, but language feels so much larger than anything an LLM can understand.
