How the 'Death' of Journalism Means More Accurate News
More people sharing their expertise and news also means more fake news. How do we fix this?
The internet brought many benefits into our lives, but it also came with new concerns, such as misinformation and fake news. Readers are growing wise to the deceptions out there, and they are increasingly hungry for information that comes as directly from the source as possible. As innovation and technology start to make it easier to connect expert opinions with the public, journalism will die.
In its place, a new breed of business journalism will take shape as tech innovators learn to best collate and coordinate executive thought leadership into easily accessible and digestible news items. Instead of journalists gathering experts to report the news, experts themselves will put out information from their own fields. Journalism will die because we’ll realize that experts are better sources of more accurate information.
With these new innovations come new problems that need to be addressed. Who decides what is true and worth sharing? How do we filter information and find sources we can trust? This isn’t a problem to be solved quickly, and readers need to understand both the risks and the benefits of democratized media.
Is fake news simply inevitable?
The internet’s capacity for information-sharing is a gift, but too much of a good thing quickly becomes a burden. By 2016, 62% of adults were getting their news from social media, and in the same year, Facebook was slammed for not doing more to curb engagement with fake news stories in the run-up to the election. Now, the topic is part of the national dialogue: How can we make online information-sharing more accountable without compromising its democratic nature?
In fact, the democratic nature of the internet is the solution itself, and we’ve already seen this play out. The reason Facebook was held accountable for the spread of misinformation is that the internet provided widespread access to its behavior, knowledge that provoked public outcry. In response, Facebook tightened its regulations, and research suggests that such efforts have successfully deterred fake news engagement since.
The same happened in 2018 when a Google algorithm update punished hundreds of publications, suddenly dropping their search rankings. Among the update’s quality measures was a strengthened E-A-T rule, which evaluates the expertise, authoritativeness and trustworthiness of online content, the website distributing it, its structure and status, and the digital platforms that host it. By rewarding the integrity of information, Google aimed to stop the spread of false information, and in the process, it promoted more accurate news.
Integrity in business breeds integrity of information
Google knew its great power came with great responsibility, and its response encouraged others to accept the same role. By leveraging this power to reward businesses that rejected the unethical practices underlying misinformation, it reinforced not only the integrity of information but also the integrity of business. Legitimate sites responded by strengthening their authority and trustworthiness, while clickbait sites either had to make serious ethical changes or fail.
With all the information on the internet, you can become an expert in anything with enough time, but the biggest challenge is sorting through it. Google’s update has had such an impact because it controls that sorting. Content producers who didn’t comply were rejecting a more honest and authentic proposal for how to present information — bad for Google and bad for the reader. More reputable sources of information were happy to jump on Google’s integrity bandwagon, and those were the ones that survived.
Information you can verify is information you can trust
Thanks to technology, anyone can become a contributor to the top business, medicine and lifestyle websites. Before Google applied its E-A-T principles in 2018, a contributor could write on any topic, even those in which they had little to no expertise.
Some outlets were also pay-to-play: companies could reach out to contributors and ask them to write about their new product, and the next thing you knew, the article was appearing in a top industry journal. Content with an ulterior motive, like trying to sell a product, gives readers a reason to doubt it. Now, betray readers’ trust with ulterior motives, and Google hits you with a penalty that drops your quality rating and rank. On the other hand, when you give readers greater access to the direct source of the information they’re reading, they know they have facts they can trust.
The issue that arises from Google’s E-A-T rule, Facebook’s deterring of fake news and countless other solutions to the spread of misinformation is the question of where truth comes from. These systems need to have the utmost integrity. They need to be unbiased. They need to present the unfiltered truth despite trends, money or politics.
Google continues to refine its role in information integrity because it knows people want accurate information from authoritative sources they trust. Sensationalism will always find its way into the news, and the road will be long and challenging as we continue to find ways to weed it out. But this new trend of integrity of information, though a work in progress, has real, tangible value that people will appreciate and, eventually, prefer.
If a platform is only presenting one side of an argument, is it really the full truth? Who gets to decide what is true and what is worth sharing? Is it up to the platform it is being shared on? The individual reader? The writer of any given piece? There is no algorithm that can fix this issue of finding ultimate truth — it is derived from an even larger societal ethics issue. This is a problem that will not be solved overnight, but by democratizing news and media, we are taking steps in the right direction.