Why a professor of fascism left the US: ‘The lesson of 1933 is – you get out’

Discussion in 'Politics' started by Frederick Foresight, Jun 17, 2025.

  1. Tuxan

     
    #21     Jun 19, 2025
  2. The following reads like the left's playbook.

    Mass Manipulation and Psyop Techniques

    While directly causing a population to become violent through psychological operations is an extreme outcome, various propaganda and psyop techniques can contribute to similar results by distorting a shared understanding of reality and inciting aggression. These methods aim to undermine critical thinking and foster a specific viewpoint, often leading to increased hostility towards targeted groups.

    Distorting Reality
    Propagandists often present an altered version of events, history, or accepted truths, causing a population to question their own understanding. This can involve:

    • Lying and Distortion: Spreading outright falsehoods or twisting facts to fit a particular narrative.
    • Selective Information: Presenting only information that supports a desired viewpoint while carefully omitting any contradictory evidence.
    • Invalidation of Dissent: Shaming, ridiculing, or silencing those who express doubts or criticisms, which further entrenches the manufactured reality.
    Inciting Hostility
    Several techniques are commonly used to foster animosity and justify aggression:

    • Dehumanization: A frequent propaganda tactic involves portraying an opposing group or "enemy" as savage, barbaric, or somehow less than human. This strips away empathy, making it easier for people to accept or even endorse violence against them. Atrocity propaganda—spreading factual, exaggerated, or fabricated stories of an enemy's crimes—is a prime example, often leading to increased hatred and calls for revenge.
    • Fear-mongering: Generating or amplifying fear of an "outgroup" or a perceived threat can push a population to accept extreme measures, including violence, as a necessary means of self-preservation.
    • Creating "Us vs. Them" Narratives: Dividing a population into clear "good" and "evil" sides, with the "other" being demonized, can foster deep-seated animosity and provide a rationale for aggressive actions.
    • Appeals to Emotion over Reason: Propaganda often bypasses logical reasoning by directly appealing to powerful emotions like anger, fear, patriotism, or resentment. This makes people more likely to act on impulse or without critical thought.
    • Isolation and Control of Information: In extreme scenarios, controlling the flow of information and isolating a population from alternative viewpoints can reinforce a distorted understanding of reality and prevent independent critical assessment.
    Connection to Violence
    When a population is subjected to sustained psychological manipulation that fundamentally alters their perception of reality, cultivates deep distrust of "others," and ignites strong negative emotions like fear and hatred, the potential for violence significantly increases. Individuals who come to believe a fabricated reality—where a specific group poses an existential threat, for instance—may become more susceptible to acting violently in ways they otherwise wouldn't.
     
    #22     Jun 19, 2025
  3. smallfil

    Canada, like the UK, has turned more and more fascist. Now a Canadian citizen can get locked up for criticizing the government. Canadians, including Cuddles, have no clue that their much-loved Justin Trudeau took their rights away. You have Carney now, but free speech is long gone.
    That is fascism right in front of your faces, Canadians, ET trolls especially.
     
    #23     Jun 19, 2025
  4. Tuxan

    Maybe try having a day where you do not get hit in the head? It won't reverse the brain damage you already have, but you are clearly getting worse.

    Computer says:
    [Attached screenshot: Screenshot_20250619_072344_ChatGPT.jpg]

    The failure to recognize your own reflection arises from the lack of an internal model of the self.
     
    Last edited: Jun 19, 2025
    #24     Jun 19, 2025
  5. Yes, it is used across the political aisle.

    However, all one has to do is watch one episode of Rachel Maddow and then one episode of a Fox show to see the difference. Maddow has nothing to do with journalism. She is a dramatic actress. It is constant fearmongering. It never stops on the left and it is delusional.

    Talk about delusional: it is the left running around stating that men are women and women are men. Completely delusional, and a psyop campaign.

    Democracy will end... fascists and racists everywhere I look... all psyops, and none of it has anything to do with reality.
     
    #25     Jun 19, 2025
  6. Since you clearly have no clue how LLMs work and think that they are perfect arbiters of the truth, you should read the following:

    The Statistical Parrot: Why Large Language Models Must Never Be Blindly Trusted

    Large Language Models (LLMs) have dazzled the world with their ability to generate text that mimics human fluency and creativity. Yet, this very prowess has led to a dangerous misconception: that LLMs are reliable sources of truth and sound reasoning. In reality, LLMs are not arbiters of fact, nor are they neutral or unbiased. They are statistical engines built on inherently flawed and biased data, and their outputs must always be treated with skepticism and scrutiny.

    LLMs Are Pattern-Matchers, Not Truth-Tellers
    At their core, LLMs are probabilistic models trained to predict the next word in a sequence based on patterns in massive datasets scraped from the internet and other sources[1][2]. They do not possess any internal understanding of truth, logic, or real-world facts. If a falsehood is prevalent in their training data, they will reproduce it with the same confidence as a verified fact[3][4]. This is why LLMs are prone to “hallucinations”—generating content that sounds plausible but is entirely fabricated or incorrect[5][6][7]. These hallucinations are not rare glitches; they are a direct consequence of how LLMs operate and cannot be fully eliminated with current technology[5][6][7].
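
    To make that concrete, here is a minimal toy sketch in Python of what next-token prediction amounts to: sampling from a learned conditional distribution, with no truth check anywhere in the loop. The vocabulary and probabilities are invented for illustration, not taken from any real model.

    import random

    # Toy stand-in for a language model: a table of P(next word | context).
    # A real LLM learns billions of such conditional probabilities from text;
    # these numbers are made up for the sketch.
    toy_model = {
        ("the", "capital", "of", "france", "is"): {"paris": 0.92, "lyon": 0.05, "berlin": 0.03},
        ("the", "moon", "is", "made", "of"):      {"rock": 0.55, "cheese": 0.40, "iron": 0.05},
    }

    def next_token(context):
        dist = toy_model[tuple(context)]
        words = list(dist.keys())
        weights = list(dist.values())
        # The "answer" is whatever is statistically likely given the context;
        # nothing here checks whether it is true.
        return random.choices(words, weights=weights)[0]

    print(next_token(["the", "moon", "is", "made", "of"]))  # sometimes prints "cheese"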

    How Punctuation and Word Choice Shape AI Understanding
    Every element of a prompt, no matter how subtle, influences the output of an LLM. The model's response is not only shaped by overt language but also by nuanced cues in wording, tone, and framing. Even the slightest alteration—a single adjective, the order of ideas, or a piece of punctuation—can alter the model's internal probability distributions and steer the output in a different direction.

    For instance, phrasing a prompt with terminology common to a particular viewpoint can bias the LLM toward generating a response that aligns with that perspective, even if the prompt isn't explicitly political. This sensitivity to linguistic detail underscores a critical point: no prompt is ever truly neutral. Because every word, symbol, and structural choice acts as a lever shaping the final result, careful and deliberate prompt design is essential for achieving reliable and relevant output.
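
    A tiny sketch of that framing effect, again with an invented lookup table rather than a real model: two prompts that differ by a single loaded word map to different learned distributions, so the most likely continuation flips.

    # Toy illustration of prompt framing. The probabilities are invented;
    # the point is only that one extra word changes which continuation wins.
    toy_model = {
        ("the", "new", "tax", "is"):           {"fair": 0.52, "unfair": 0.48},
        ("the", "new", "tax", "burden", "is"): {"fair": 0.21, "unfair": 0.79},
    }

    def most_likely(context):
        dist = toy_model[tuple(context)]
        return max(dist, key=dist.get)

    print(most_likely(("the", "new", "tax", "is")))           # "fair"
    print(most_likely(("the", "new", "tax", "burden", "is"))) # "unfair"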


    Inherent Bias: LLMs Mirror and Magnify Human Prejudices
    LLMs are not neutral. They inherit and often amplify the biases, stereotypes, and misinformation embedded in their training data[8][9][10][11]. This includes everything from gender and racial stereotypes to political and cultural prejudices. Studies have shown that LLMs can overrepresent dominant groups and perspectives, while underrepresenting or mischaracterizing minorities and marginalized voices[9][10][11]. These biases are not just artifacts—they are baked into the model’s architecture and training process, and can manifest in outputs ranging from subtle stereotyping to overtly discriminatory content[9][10][11].

    Position bias is another documented flaw: LLMs tend to overemphasize information at the beginning and end of documents, neglecting the middle, which can skew outputs in unpredictable ways[8]. And because the data used to train LLMs is often unbalanced or incomplete, the resulting models can perpetuate and even intensify existing societal inequalities[10][11].
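
    Position bias is easy to probe in principle: plant a fact at the start, middle, or end of a long context and check whether it is recalled. The sketch below is a caricature (the fake_model only "reads" the first and last few hundred characters, standing in for a real model call, and the fact and filler text are arbitrary), but it shows the shape of such a test.

    # "Lost in the middle" probe sketch. fake_model exaggerates position bias
    # by ignoring everything except the edges of its context; swap in a real
    # model call to run the probe for real.
    FACT = "The delivery code is 7341."
    FILLER = "This sentence is routine filler about nothing in particular. " * 100

    def build_context(position):
        if position == "start":
            return FACT + " " + FILLER
        if position == "middle":
            half = len(FILLER) // 2
            return FILLER[:half] + FACT + " " + FILLER[half:]
        return FILLER + FACT  # "end"

    def fake_model(context):
        visible = context[:300] + context[-300:]  # only the edges are "remembered"
        return "7341" if "7341" in visible else "I don't know"

    for position in ("start", "middle", "end"):
        print(position, fake_model(build_context(position)))  # the middle one gets lost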

    Confidence Without Competence: The Dunning-Kruger Effect in Code
    Perhaps most dangerously, LLMs are engineered to produce text that is not just fluent, but confident and authoritative in tone—even when they are completely wrong[12]. This “confidence without competence” effect is well-documented: LLMs frequently express high certainty in their answers, regardless of their actual accuracy[9][12]. Research shows that even experts are sometimes misled by the persuasive style of LLM-generated explanations, only to rate their trustworthiness much lower upon closer inspection[9][12].
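
    One hedged way to check this yourself is to log a model's stated confidence alongside answers you have independently verified, then compare the averages. The records below are invented for illustration; the gap between the two numbers is the overconfidence being described.

    # Minimal calibration check: stated confidence vs. verified accuracy.
    # These records are made up; in practice you would collect real answers
    # and verify them against independent sources.
    records = [
        {"confidence": 0.95, "correct": True},
        {"confidence": 0.95, "correct": False},
        {"confidence": 0.90, "correct": False},
        {"confidence": 0.85, "correct": True},
        {"confidence": 0.99, "correct": False},
    ]

    avg_confidence = sum(r["confidence"] for r in records) / len(records)
    accuracy = sum(r["correct"] for r in records) / len(records)
    print(f"stated confidence: {avg_confidence:.2f}  actual accuracy: {accuracy:.2f}")
    # A large gap (here roughly 0.93 vs 0.40) is "confidence without competence".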

    No Real-World Grounding or Understanding
    LLMs do not possess any genuine understanding of the world. They do not know what “London” is beyond its statistical association with other words like “England” or “Big Ben”[13][14]. They lack causal understanding and deductive logic, and cannot distinguish between fact and fiction unless those distinctions are explicitly present in their training data[13][14]. When asked to reason about the real world, LLMs are simply manipulating linguistic patterns—they are not drawing on any lived experience or grounded knowledge[13][14].

    Reasoning Is an Illusion: LLMs Imitate, They Do Not Think
    While LLMs can appear to solve problems or reason through complex scenarios, this is largely an illusion[15][13][14]. Their “reasoning” is nothing more than sophisticated pattern-matching, and their performance drops sharply when faced with novel or slightly altered problems[15][13]. They can mimic the steps of logical deduction if those steps are common in their training data, but they do not actually understand or reason through the problem as a human would[15][13][14].

    Misinformation and Overreliance: A Recipe for Disaster
    The risks of overreliance on LLMs are severe. LLMs can generate misinformation at scale, producing text that is not only inaccurate but also highly convincing[3][6][16]. This has already led to real-world consequences, such as legal rulings against companies whose chatbots provided false information to customers[6][16]. The more users trust LLM outputs without verification, the greater the risk of spreading falsehoods, making poor decisions, and perpetuating harmful stereotypes[3][6][16].

    Conclusion: LLMs Must Be Approached With Extreme Caution
    LLMs are powerful tools, but they are fundamentally unreliable as sources of truth or unbiased reasoning. Their outputs are shaped by the flaws, biases, and gaps in their training data, and their ability to sound confident only increases the danger of blind trust. Treating LLMs as infallible is not just misguided—it is perilous.

    Every output from an LLM should be viewed as a statistical guess, not a fact. Critical thinking, independent verification, and an awareness of these models’ limitations are not optional—they are essential. The “intelligence” of LLMs is not knowledge or wisdom; it is the ability to predict what comes next in a sequence of words. Never confuse that with truth, understanding, or fairness[1][9][6].

    1. https://data.world/blog/you-cant-trust-an-llm/
    2. https://sloanreview.mit.edu/article/the-working-limitations-of-large-language-models/
    3. https://milvus.io/ai-quick-reference/how-can-llms-contribute-to-misinformation
    4. https://www.nownextlater.ai/Insights/post/Measuring-the-Truthfulness-of-Large-Language-Models
    5. https://www.lakera.ai/blog/guide-to-hallucinations-in-large-language-models
    6. https://genai.owasp.org/llmrisk/llm092025-misinformation/
    7. https://alhena.ai/blog/llm-hallucination/
    8. https://news.mit.edu/2025/unpacking-large-language-model-bias-0617
    9. https://dr.library.brocku.ca/handle/10464/18646
    10. https://arxiv.org/html/2411.10915v1
    11. https://www.nature.com/articles/s41599-024-03609-x
    12. https://arxiv.org/html/2309.16145v1
    13. https://dzone.com/articles/llm-reasoning-limitations
    14. https://www.wordrake.com/blog/youre-thinking-about-reasoning-wrong
    15. https://pub.towardsai.net/apple-llms-cannot-reason-acdaeab9b796
    16. https://www.evidentlyai.com/blog/llm-hallucination-examples
    17. https://techxplore.com/news/2023-08-large-language-high-toxic-probabilities.html
    18. https://openreview.net/forum?id=4O0v4s3IzY
    19. https://www.vellum.ai/blog/llm-hallucination-types-with-examples
    20. https://statmodeling.stat.columbia.edu/2024/05/21/what-to-make-of-implicit-biases-in-llm-output/
    21. https://www.linkedin.com/pulse/llms-reasoning-truth-vijai-pandey-lkaze
    22. https://www.sciencedirect.com/science/article/pii/S0378720625000060
    23. https://hellofuture.orange.com/en/how-to-avoid-replicating-bias-and-human-error-in-llms/
    24. https://www.superannotate.com/blog/ai-hallucinations
    25. https://nexla.com/ai-infrastructure/llm-hallucination/
    26. https://learnprompting.org/docs/basics/pitfalls
    27. https://www.youtube.com/watch?v=QuJ8QB9haok
    28. https://www.6clicks.com/resources/blog/unveiling-the-power-of-large-language-models
    29. https://newsletter.ericbrown.com/p/strengths-and-limitations-of-large-language-models
    30. https://law.stanford.edu/press/bias-in-large-language-models-and-who-should-be-held-accountable/
     
    Last edited: Jun 19, 2025
    #26     Jun 19, 2025
  7. spy

    The problem is many ET'ers were born without a brain of their own to begin with... all like the Scarecrow from The Wizard of Oz. They're just bad chatbots regurgitating other chatbots' hallucinations without genuine thought.
     
    #27     Jun 19, 2025
  8. Tuxan

    Remember, I have a master's in computer science.

    The statistical guess is pretty good though, eh, jefe?
     
    #28     Jun 19, 2025
  9. Tuxan

    What you see as evil... It's you.

     
    #29     Jun 19, 2025
  10. Tuxan

    I don't watch cable. Mostly because I saw Judge Jeanine once... Once...

    What age are you? I'm 52 and I'm too young for network TV.
     
    #30     Jun 19, 2025