Environmentalists like to say “every day is Earth Day” to remind us that the stewardship of our natural resources is a never-ending process.

In that spirit, I offer a friendly reminder that, in the age of the internet, every day is April Fools’ Day. We all have an ongoing responsibility to be aware of the digital deception and trickery all around us.

From COVID-19 denial to the Jan. 6 Capitol Hill insurrection, the past year has shown just how easily online misinformation can lead to real-world consequences. Despite the efforts of platforms like Twitter and Facebook, fabricated news stories, misleading videos, and manipulated images continue to bounce across the web’s echo chambers.

We are now on the brink of an even more dire misinformation apocalypse.

Synthetic content generated by artificial intelligence (including so-called “deepfake” videos) is becoming increasingly sophisticated and easier to produce. Deepfakes are so named because they use advances in machine learning, specifically deep neural networks, to create audio, images, and videos of people saying and doing things they never did.

AI-based software can already write entire articles with only a headline as a prompt, emulate a person’s speech, or generate endless pictures of people who do not exist. The recent release of jaw-dropping deepfake videos of Tom Cruise offered a stark reminder of how realistic these fabrications can be — and how easily such technologies can be misused.

In recent months, a young Japanese woman with a large following on Twitter was revealed to be a 50-year-old man using an easily accessible app to swap his face for that of a young woman. And a woman in Pennsylvania was charged with “cyber harassment of a child” for sending fake videos to her daughter’s cheerleading coaches purporting to show other girls on the team naked, drinking, and smoking.

The potential misuses are vast and could be catastrophic. Imagine a fake video of a world leader announcing an imminent nuclear strike on a rival nation. How long would military leaders be willing to wait to verify authenticity before launching a retaliatory strike?

Another grave concern is that the proliferation of misinformation and synthetic media will give rise to the so-called “liar’s dividend,” where if anything can be fake, then nothing has to be real and anyone can claim that inconvenient truths are lies. Human-rights advocates who rely on video evidence of abuses are concerned that authoritarian governments will take advantage of the liar’s dividend to simply cast off real videos as fake.

The inability to agree on facts poses an existential threat to our society and democracy. This is the landscape awaiting us unless we act.

So, what can be done?

Lawmakers can and must hold digital platforms accountable for the harms that come from content promoted and disseminated through their services. Just as platforms were pressured to reduce copyright violations, they can also be nudged to do better at demoting misinformation.

We need more investment in technologies that identify and track misinformation and deepfakes. While the past few years have seen impressive advances, those working to detect and track misinformation remain outgunned and outfinanced.

Ultimately, it is up to all of us as consumers of media to be more critical of what we read, hear, and see online. This doesn’t mean casting off real news as fake news. Rather, it’s about becoming better digital citizens and changing our behavior, donning the digital equivalent of a mask to stop the spread of the misinformation virus.

Before sharing anything on digital media, take a breath. Ask whether the story is clickbait designed to manipulate you by triggering your emotional response. Read past the headline and verify the content you are sharing: Extraordinary claims require extraordinary evidence. Use Snopes, PolitiFact, and other fact-checking services. And ask if you might be the fool, whether the calendar reads April 1 or not.

Hany Farid is associate dean and head of school for the University of California, Berkeley School of Information. He also is a senior faculty advisor for the Center for Long-Term Cybersecurity. He wrote this originally for InsideSources.com.