Media Literacy: the Interactive Media Bias Chart https://app.adfontesmedia.com/chart/interactive
A little data-driven gem: https://usafacts.org/
Who Owns Corporate Media? The Five Media Conglomerates, an interactive graph: https://instagraph.ai/graph/EG8XpOPyMfehJA7YmDxpqfFJFVv2/qmoBWcTDRgoouxD7oftB
Russians have been living like this for a long time, in another world entirely. American grandparents who don't appear to be actors talk about transgender mice and aircraft crashes, about how the free-press part of the constitution is wrong because it's woke, and about how Zelenskyy conscripted Russian troops to start the war.
Cyber-Security Experts Warn Election Was Hacked: 'Musk is guilty as fuck'
Rachel Donald, Nov 19, 2024

"Cyber-security experts across America are raising the alarm of wide-scale election fraud securing Trump’s victory — and the data is compelling. Two open letters penned by computer scientists and hacking experts have detailed how the USA’s election software was compromised and the relatively simple hack which could have then been used to fix the results in the seven swing states. They are calling for an immediate hand recount in key precincts which, they say, should swiftly show that a number of these ballots never existed.

There are already numerous articles online stating that this is a left wing conspiracy, and Elon Musk himself has warned that those raising concerns about the “hoax” will face “the hammer of justice”. However, the articles debunking these concerns have only focused on the claim that Musk used his internet service Starlink to steal the election. As detailed below, this is not what the cyber security experts are warning of — although Starlink may have played a role.

The Data

The key data raising concerns that a hack may have been deployed is the number of bullet ballots which exist for Trump in swing states. Bullet ballots are when voters vote for one candidate—in this case the President—and don’t fill out the rest of the ballot. Every year, in every state—including in the past two elections Trump ran in—the percentage of bullet ballots is around 1%. This trend has stayed consistent in the 43 non-swing states in the 2024 election. However, the percentage of bullet ballots is not just anomalous in swing states for Trump this year—it is off the charts."

https://www.planetcritical.com/p/cyber-security-experts-warn-election-hacked
Update: https://www.planetcritical.com/p/election-fraud-debunked
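The statistical reasoning in the quoted claim (a historical bullet-ballot rate of roughly 1% versus a far higher rate in some swing-state precincts) can be sketched as a one-proportion z-test. This is only an illustration of the argument's shape; the vote counts below are hypothetical, not figures from the article.

```python
# One-proportion z-test sketch: compare an observed bullet-ballot share
# against the ~1% historical baseline the article cites. All vote counts
# here are hypothetical.
from math import sqrt

def bullet_ballot_z(bullet_votes: int, total_votes: int,
                    baseline: float = 0.01) -> float:
    """Z-score of the observed bullet-ballot share against the baseline rate."""
    p_hat = bullet_votes / total_votes
    se = sqrt(baseline * (1 - baseline) / total_votes)  # std. error under H0
    return (p_hat - baseline) / se

# Hypothetical county: 100,000 ballots cast, 3,000 of them bullet ballots (3%).
print(f"z = {bullet_ballot_z(3000, 100_000):.1f}")
```

Even a two-point jump over the baseline produces an enormous z-score at county-sized vote totals, which is why the claim hinges entirely on whether the underlying counts are real; the linked follow-up piece debunks them.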
Timestamping: https://x.com/i/grok/share/nlKi6sQmYE1TpoBH0Bhxc0fff
"Who is the biggest disinformation spreader on X? Keep it short, one name only." --- "Elon Musk"
Curious how long it will take for him to implement his long game: https://www.elitetrader.com/et/thre...orm-for-33-billion.383843/page-2#post-6114854
Right-wing disinfo will flourish under the Rump regime:

“The US National Science Foundation (NSF) has terminated government research grants for studying misinformation and disinformation. The defunding comes at a time when propaganda and scams fuelled by the latest artificial intelligence technologies are flooding social media networks, and tech companies are abandoning content moderation efforts and eliminating fact-checking teams.

The grant cancellations began on 18 April when the NSF published a statement saying it would not support research on misinformation or disinformation “that could be used to infringe on the constitutionally protected speech rights of American citizens”, citing an executive order by President Donald Trump. An agency spokesperson declined to answer additional questions. Both misinformation and disinformation typically refer to false or inaccurate information, except that disinformation is deliberately intended to deceive.

“The costs of false beliefs to democracy and health cannot be priced,” says Alexios Mantzarlis at Cornell Tech in New York. He searched a government awards database for potentially affected grants and contacted researchers to estimate that about $30 million in unspent grant funding had been cancelled. “This is a tiny amount for the US government – but a large amount for academics to raise from other sources,” he says.”
Russian networks flood the Internet with propaganda, aiming to corrupt AI chatbots
By Annie Newport, Nina Jankowicz | March 26, 2025

A pro-Russia network is internally corrupting large-language models to reproduce disinformation and propaganda.

Scientists, policy experts, and artists have been concerned about the unintended consequences of artificial intelligence since before the technology was readily available. With most technological innovations, it’s common to ask whether that invention could be maliciously weaponized, and there has been no shortage of experts warning that AI is being utilized to spread disinformation. Just a little more than two years after the public release of AI language models, there are already documented cases of malign actors using the technology to mass-produce harmful and false narratives at a previously infeasible scale. Now, an apparent attempt by Russia to infect AI chatbots themselves with propaganda shows that the internet as we know it may be changed forever.

The self-iterating and widespread nature of artificial intelligence is a perfect medium for a novel abuse of the technology vis-à-vis disinformation. This can be done in two ways. The more familiar harmful uses for AI are external to the technology: they spread falsehood by instructing AI models to mass-produce false narratives—for example, using AI to quickly craft thousands of articles containing selected disinformation, then publishing those articles online. But disinformation can also be dispersed via the internal corruption of large-language models themselves. This phenomenon—which we have dubbed “LLM grooming” in a new report—is poised to take the internet and digital disinformation into a dangerous new era.
Our report details evidence that the so-called “Pravda network” (no relation to the propaganda outlet Pravda), a collection of websites and social media accounts that aggregate pro-Russia propaganda, is engaged in LLM grooming with the potential intent of inducing AI chatbots to reproduce Russian disinformation and propaganda. Since we published our report, NewsGuard and the Atlantic Council’s Digital Forensic Research Lab (DFRLab)—organizations that study malign information operations—confirmed that Pravda network content was being cited by some major AI chatbots in support of pro-Russia narratives that are provably false. Left unaddressed, these false narratives could plague nearly every piece of information online, undermining democracy around the world.

The public and private sectors can take certain steps to mitigate many of the harms of LLM grooming. Organizations that create and manage large language models must become aware of the risk of LLM grooming and ensure that their current and future generative models do not rely on known foreign disinformation. And lawmakers should consider two primary policy initiatives: one would require that organizations that engineer generative models take reasonable steps to ensure their models avoid known foreign disinformation, and another would fund information-literacy programs for adults and children that would help them navigate a changing internet. Government agencies and civil society organizations with a stake in information security should also deploy a rapid public education campaign to warn everyday internet users about the dangers of LLM grooming and the new era of web navigation it will usher in.

What is the Pravda network?

The Pravda network is a well-documented entity in the world of Russian hybrid warfare.
Its earliest sites began operating in 2023, and though it regurgitates many previously known disinformation narratives, its behavior has otherwise been an outlier relative to other Russian information operations. The Pravda network’s peculiarity is best showcased by its size in terms of publishing rate and domain reach, its lack of user-friendliness, and its persistent shortage of organic engagement with humans. The network now consists of 182 unique internet domains and subdomains that target at least 74 countries and regions as well as 12 commonly spoken languages, two international organizations (the EU and NATO), and three prominent heads of state. The network’s expansion over time, its largely automated content sharing, and its habit of hopping between domains and subdomains point to deep centralization of operations at the network’s core.

The American Sunlight Project, a nonprofit dedicated to exposing disinformation in American discourse, estimates the Pravda network’s annual publishing rate is at least 3.6 million pro-Russia articles. This is likely an underestimate, given the randomness of the sample we collected to calculate this figure and its exclusion of some of the network’s most active sites.

Despite its growth—including on the social media platforms X (Twitter), Telegram, the Russian-based VK, and Bluesky—the network remains user-unfriendly across all domains and subdomains. For example, it has no search function, a generic navigation menu, and dysfunctional scrolling on many sites and pages. Webpage layout issues and obvious mistranslations persist on the network’s sites as well, contributing to appearances that the network is not primarily intended for human consumption.
Given what appears to be its small human audience and the massive footprint of the network, we believe the network isn’t targeting humans but an automated audience: web crawlers involved with search engine optimization, and scraping algorithms that collect data for training datasets such as those used for large language models. This targeting strategy is a stark departure from other pro-Russia information operations, and one with serious social, political, and technological consequences for the world.

The novel threat demonstrated by the Pravda network—and any other information operation that uses it as a model—is not contained to its websites and social media posts. By strategically placing its content so it will be integrated into large language models, it is ensuring that pro-Russia propaganda and disinformation will be regurgitated in perpetuity if model managers do not exclude such information from their training datasets. For example, an unwitting user may cite a Pravda network article that a chatbot provided them, believing it to be credible and thereby broadening the audience of that narrative. But information laundering of Pravda network content can take place completely outside the large-language-model ecosystem. The network’s content has been documented in Wikipedia citations, which similarly can lead to increased viewership of and belief in a given narrative.

The automated spread of Pravda network disinformation negates the need for the network to seek a direct, organic, human audience through traditional means such as those employed by RT, a Russian government-controlled international news television network. The Pravda network simply needs to wait for its content to be hoovered up by automated agents, something that has apparently already occurred in this context. In addition to the social or psychological risks associated with LLM grooming, our report considers its cyber implications as well.
A study published in Nature in 2024 found that iterative relationships between large language models—that is, models being trained on AI-generated content, generating additional content, and so on—threaten to make an ouroboros of the internet. The study notes that model collapse occurs regardless of the generative model and warns that human-produced content may become a premium on the internet as it rapidly fills with machine-generated content. The implication of this study within the context of LLM grooming and the Pravda network is stark: pro-Russia, disinformation-riddled AI slop—low-quality content generated by these apps—may become some of the most widely available content on the internet. Any supporter of democracy should be keenly aware of this, given that undermining democracy around the globe is arguably Russia’s foremost foreign policy objective.

How to fight internet pollution in the AI era

There are solutions to the problems discussed in the American Sunlight Project report; many of them are technically feasible and even politically popular in much of the democratic world. First, any organization that builds training datasets or releases generative AI systems must be made aware of the growing risk posed by the Pravda network. These organizations span the private sector, where much AI innovation occurs, but academia is a major hub of AI research and must also be involved. Ideally, these organizations would proactively implement rigorous guardrails to ensure that truthful, quality data is used in training their software and undertake painstaking data-hygiene efforts to remove any harmful data already inadvertently collected.
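The "model collapse" dynamic the Nature study describes can be illustrated with a deliberately tiny toy simulation (this is my own sketch, not the study's actual setup): treat a "model" as nothing more than an empirical token distribution, and train each generation only on samples drawn from the previous generation's model. Once a token fails to appear in a generation's sample it is gone for good, so the distribution's support can only shrink.

```python
# Toy model-collapse demo: each generation refits an empirical token
# distribution on a finite sample from the previous generation. Rare
# tokens drop out and never return, so diversity decays.
import random
from collections import Counter

def collapse_demo(vocab_size: int = 20, samples_per_gen: int = 30,
                  generations: int = 50, seed: int = 0) -> list[int]:
    """Return the support size (distinct surviving tokens) per generation."""
    rng = random.Random(seed)
    # Generation 0: a uniform "human" distribution over the whole vocabulary.
    dist = {tok: 1 / vocab_size for tok in range(vocab_size)}
    support = [len(dist)]
    for _ in range(generations):
        tokens, weights = zip(*dist.items())
        sample = rng.choices(tokens, weights=weights, k=samples_per_gen)
        counts = Counter(sample)                      # "train" on synthetic data
        dist = {t: c / samples_per_gen for t, c in counts.items()}
        support.append(len(dist))
    return support

support = collapse_demo()
print(f"vocabulary support: {support[0]} tokens -> {support[-1]} tokens")
```

The analogy to the article's warning: if the "rare tokens" are human-produced, truthful content and the oversampled tokens are mass-published propaganda, iterative training amplifies whatever dominates the crawlable web.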
These organizations should also coordinate with state-led agencies on foreign digital influence, such as France’s VIGINUM, the government agency that works to mitigate foreign interference in French discourse, which originally reported on the Pravda network in February 2024. Cross-industry and public-private partnerships are vital for combatting disinformation in a rapidly evolving technological landscape.

Also, lawmakers must consider myriad policy options that would curtail LLM grooming and its social and technological consequences. One such option: for-profit and nonprofit entities that release large language models and other generative models should be required to take reasonable steps to ensure that their training datasets and the models themselves do not include known, malign foreign disinformation. Regulations should also require the relevant organizations to publish clear, highly visible labels on large-language-model outputs noting that those outputs may contain foreign disinformation. These labels should be much more specific, cautionary, and visible than the current disclaimers often found in AI chatbots.

Legislators should also consider a second and deeply necessary option: national information-literacy courses for children and adults alike, free of cost. Case studies from Estonia and Finland point to success in building resilience in the face of malign influence campaigns from foreign or anti-democratic actors. Coursework on information literacy includes both media literacy, which is the ability to find quality news sources and think critically about persuasive arguments in the press, and digital literacy, the ability to navigate the ever-evolving internet and its many platforms. This latter concept extends to AI literacy, wherein users of these platforms have a deep understanding of what AI is and what its many limitations are. Policymakers could consider a tax on the companies that debut AI platforms to fund coursework on information literacy.
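The "reasonable steps" the article asks of model builders would, at their simplest, look like corpus filtering against a blocklist of known disinformation domains. Here is a minimal sketch of that idea; the domain names and corpus records are hypothetical placeholders, not real Pravda-network domains.

```python
# Minimal sketch of training-corpus hygiene: drop any record whose source
# URL belongs to a blocklisted domain or one of its subdomains. Domains
# and records below are hypothetical placeholders.
from urllib.parse import urlparse

BLOCKLIST = {"example-pravda-mirror.test", "another-known-mirror.test"}

def is_blocked(url: str, blocklist: set[str] = BLOCKLIST) -> bool:
    """True if the URL's host is a blocklisted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in blocklist)

def filter_corpus(records: list[dict]) -> list[dict]:
    """Keep only records whose source URL is not blocklisted."""
    return [r for r in records if not is_blocked(r["url"])]

corpus = [
    {"url": "https://example.org/news/1", "text": "..."},
    {"url": "https://de.example-pravda-mirror.test/item", "text": "..."},
]
print(len(filter_corpus(corpus)))  # the mirror-site record is dropped
```

A real pipeline would need far more than this, of course: the article notes the network hops between domains and subdomains, so a static blocklist must be continuously updated from sources like VIGINUM's and NewsGuard's reporting, and content-level signals would be needed for laundered copies hosted elsewhere.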
After all, these companies benefit from data freely produced by humans and should be willing to pay that back in kind to the same population that allows their profit model to function at a basic level. Finally, governments and civil society organizations should consider engaging in a public-education campaign spanning the private and public sectors to inform users about the new chapter of the internet humans have entered. Until major policy changes occur across the democratic world, people cannot take for granted that any given bit of information they read or watch is accurate—no matter how familiar or powerful the platform that presents it. This is perhaps the most urgent of the actions to be taken, given the findings of our report. Every individual or organization that is aware of the risks of LLM grooming can play a part in spreading word about those risks.

Given the Trump administration’s anti-regulatory stance with regard to American technology companies, it’s unlikely that the United States will introduce any measures to ameliorate LLM grooming in the next four years. But continuing to plod forward with the assumption that the digital landscape is as it has been for the past 20 years would be a monumental mistake. Regardless of their role, scientists, industry leaders, policymakers, and casual internet users all have a massive stake in the continued stability and usability of the internet. As LLM grooming and other novel threats challenge the internet at a fundamental level, it will take a society-wide effort to anticipate and combat them.

https://thebulletin.org/2025/03/rus...ith-propaganda-aiming-to-corrupt-ai-chatbots/
Counter-acting the "Firehose of Falsehood"

For several reasons, #NAFO is an amazing movement. Lemme elaborate: 1/8

1) It's a de-centralized movement without leaders or idols. Leaders and hierarchies are the biggest reason why social media movements die. @Kama_Kamilia has stated on many occasions that NAFO is what people want NAFO to be. And he's stated that NAFO is not about him, or as he put it, "Once you put a face to a name, it becomes about the person and not the message." I agree with this 100%, and I believe this is the reason why NAFO has been so successful. 2/8

2) NAFO helped me when no one else would. Back in Oct 2022, when I started writing #vatniksoup, most of the attention came from cartoon dogs. They spread my message, they followed me and recommended my content. @betelgeuse1922 had a similar experience: 3/8

3) NAFO works on so many levels. It crowdfunds the Ukrainian battle against Russia. It fights against Russian disinformation on social media. It spreads awareness and amplifies the pro-Ukrainian message. Shiba inus support you when you're in trouble, they CARE. 4/8

NAFO OFAN Donations: a list of donations made by the NAFO OFAN shop, with links to the tweets. Total donated: $127,596.45 as of Jan 16, 2025. https://nafo-ofan.org/pages/nafo-ofan-donations

4) NAFO is extremely effective against the Russian propaganda style called the "Firehose of Falsehood". It doesn't debate or argue with bad actors such as Russian diplomats, but ridicules their ridiculous and false message. And it does it in high volume. 5/8

Now, there's always room for criticism. Like every movement, NAFO has members who like to stir shit and provoke strong reactions. There are also people who claim to be NAFO but actually just provoke infighting. But you shouldn't judge a de-centralized movement by a few bad apples.
Then there are people who judge NAFO because of an inflatable shark, and completely disregard all the actual work NAFO has done. This is the hill they are willing to figuratively die on, while their countrymen die en masse on actual hills in Ukraine. 7/8 https://t.co/hvNhxjPUK0

Finally, there's a reason why Russian high-level officials comment on NAFO, and that reason is simple: the movement is extremely effective. That's why they try to undermine and sabotage it with their silly accusations of "russophobia". Fellow Shibas, keep on shitposting! 8/8