With the emergence of A.I. and the proliferation of fake content on social media, it’s growing harder to differentiate real news from misinformation. Microsoft just launched a new media program to combat the problem by attaching a credibility-check page to online news via ads. The program is a partnership between Microsoft and the Trust Project, a nonprofit group of news outlets seeking to protect integrity in the media.
Microsoft will now put ads on some online news articles that direct readers to the Trust Project’s “8 Trust Indicators.” The ads will appear on readers’ devices if they use Microsoft products and systems, including Outlook email. The program’s page explains the nonprofit’s eight tenets of credible journalism and tells readers what questions to ask about the media they consume before trusting it.
For example, people reading an article should ask, “Do they give details so we can check the sources ourselves?” and “For investigative, in-depth or controversial stories, does the journalist provide sources for each claim?”
Disinformation, which differs from misinformation in that it is intended to deceive, has inflamed national crises in recent years, including the anti-vaccination movement during the COVID-19 pandemic and election denialism following the 2020 election. Conspiracy theories have also sparked radicalism and political violence.
The media literacy program comes after Microsoft invested $10 billion in buzzy startup OpenAI, maker of the A.I. chatbot ChatGPT. The chatbot is used extensively for plagiarism, especially by students, and is known for producing “hallucinations,” or false information. Microsoft has also integrated A.I. into its search engine Bing by adding ChatGPT-like features to the service.
Microsoft’s A.I. ventures may be enabling the spread of misinformation even as its media trust program attempts to combat it. On June 5, European regulators pushed several tech companies, including Microsoft, Google, and Facebook parent Meta, to label A.I.-generated content so that people who see it aren’t confused about its source.
At a briefing in Brussels, European Commission Vice President Věra Jourová said that chatbots are “fresh challenges for the fight against disinformation.” In addition to chatbots, many tech companies now offer A.I. image-generation services that can produce realistic but fake visuals, including a fabricated picture of an explosion at the Pentagon that briefly caused a stock market selloff. Some experts predict that 90% of all online content could be A.I.-generated in just a few years.
According to the EU, as of May 31, the U.S. and EU are collaborating on a voluntary A.I. code of conduct for industry players to sign.
Microsoft did not immediately respond to a request for comment.