A new report from Data & Society raises doubts about automated solutions to deceptively altered videos, including machine learning-altered videos called deepfakes. Authors Britt Paris and Joan Donovan argue that deepfakes, while new, are part of a long history of media manipulation -- one that requires both a social and a technical fix. Relying on AI could actually make things worse by concentrating more data and power in the hands of private corporations. The Verge reports: As Paris and Donovan see it, deepfakes are unlikely to be fixed by technology alone. "The relationship between media and truth has never been stable," the report reads. In the 1850s, when judges began allowing photographic evidence in court, people mistrusted the new technology and preferred witness testimony and written records. By the 1990s, media companies were complicit in misrepresenting events by selectively editing images out of evening broadcasts. During the Gulf War, reporters constructed a conflict between evenly matched opponents by failing to show the starkly uneven death toll between U.S. and Iraqi forces. "These images were real images," the report says. "What was manipulative was how they were contextualized, interpreted, and broadcast around the clock on cable television." Today, deepfakes have taken manipulation even further by allowing people to alter videos and images using machine learning, with results that are almost impossible to detect with the human eye. Now, the report says, "anyone with a public social media profile is fair game to be faked." Once the fakes exist, they can go viral on social media in a matter of seconds. [...] Paris worries AI-driven content filters and other technical fixes could cause real harm. "They make things better for some but could make things worse for others," she says.
"Designing new technical models creates openings for companies to capture all sorts of images and create a repository of online life."

Read more of this story at Slashdot.