A short story about mass shootings, trolling, and conspiracy theories. Augmented reality is used as a defense against trolls, filtering out content that might be distressing.
“When you take a photograph,” they asked me, “don’t you trust the camera A.I. to give you the best picture? When you scrub through drone footage, you rely on the A.I. to identify the most interesting clips, to enhance them with the perfect mood filters. This is a million times more powerful.”
I gave them my archive of family memories: photos, videos, scans, drone footage, sound recordings, immersiongrams. I entrusted them with my child.
Searches for Hayley’s name began to trend on porn sites. The content producers, many of them A.I.-driven bot farms, responded with procedurally generated films and VR immersions featuring my niece. The algorithms took publicly available footage of Hayley and wove her face, body, and voice seamlessly into fetish videos.
The new defensive neural networks—marketed as “armor”—observe each user’s emotional state in response to their content stream. Operating across text, audio, video, and AR/VR, the armor teaches itself to recognize content especially upsetting to the user and screens it out, leaving only a tranquil void. As mixed reality and immersion have become more commonplace, the best way to wear armor is through augmented-reality glasses that filter all sources of visual stimuli. Trolling, like the viruses and worms of old, is a technical problem, and now we have a technical solution.