As AI moves at breakneck speed, publishers gear up for a clash with Google and Microsoft


“Just Bard it”… not as catchy. Last week Google released its ChatGPT rival, Bard, as the chatbot race heats up (#AI-of-the-tiger). But Google expressed caution with the release, warning that “things will go wrong,” and hasn’t integrated Bard into its search engine (unlike Microsoft, which launched a new CGPT-fueled Bing and 365 apps). Google’s caution may be warranted: media publishers are gearing up for a showdown with Microsoft, Google, and OpenAI over their bots, The Wall Street Journal reported.

·       Facebook = no longer media’s biggest threat. In 2019, about half of Americans got at least some of their news from FB, and publishers wanted compensation for lost ad revenue and traffic.

·       Now it’s AI bots. As you’ve probably heard, large language models are trained on a massive amount of text data. That includes copyrighted articles from the web.

·       Media execs are demanding compensation for the use of their content in AI-generated responses. CGPT has been known to plagiarize human writing, tweaking it only lightly.

CGPT feels a connection… Last week OpenAI announced that CGPT can now browse the web (in some cases) to pull in info from beyond its 2021 training cutoff. That could pose an existential threat to news outlets. Publishing execs have started examining how much of their content has been used to “train” bots, and are said to be exploring legal options, led by the publishing trade group News Media Alliance.

·       News Corp. CEO Robert Thomson said, “Clearly, they are using proprietary content — there should be, obviously, some compensation for that.”

·       The “fair use” doctrine allows portions of copyrighted material to be used without permission in certain cases (think: news reporting, scholarship).

·       In the past, techies like Facebook and Microsoft have struck deals to pay publishers for news featured on their platforms. While OpenAI has leaned on fair use, it said it has also paid for rights to certain content.

THE TAKEAWAY

Moving faster than your problems can backfire… Rapid-fire AI releases show that tech titans are taking an “ask for forgiveness, not permission” approach. Industries haven’t had time to digest the issues that could arise (picture: educators scrambling to detect cheating), from bias to misinformation to copyright infringement. But when those issues catch up with the innovation, companies could face a backlog of problems all at once.

