Why a professor of fascism left the US: ‘The lesson of 1933 is – you get out’

Discussion in 'Politics' started by Frederick Foresight, Jun 17, 2025.

  1. Tuxan

    Tuxan

     
    #71     Jun 21, 2025
  2. spy

    spy


    I can't sleep Tux.

    Tell me about the time you lived on Proxima Centauri b.

    I love that one! It always does the trick.
     
    #72     Jun 21, 2025
    echopulse likes this.
  3. You don't know what the models were trained on. You don't know the weights, so it is impossible to state they are not biased.
     
    #73     Jun 21, 2025
  4. Tuxan

    Tuxan

    You are confusing epistemological certainty with practical evaluation. You don’t need the full recipe to know if a dish is burnt, nor the source code of a mind to spot when it’s making coherent arguments.

    You're suggesting that unless we have total transparency, we can't identify bias, but that would make human conversation itself impossible. We don't know your mental "weights" either, yet we can still evaluate your arguments. They're shite.
     
    #74     Jun 21, 2025
  5. Tuxan

    Tuxan


     
    #75     Jun 21, 2025
  6. Tuxan

    Tuxan

    I'm off to bed
     
    #76     Jun 21, 2025
  7. I know I'm giving you ammo, but I thought you would find this interesting.

     
    #77     Jun 21, 2025
  8. Tuxan

    Tuxan

    I'm not sure I'm interpreting that well but probably.

    Honestly, I don't think it's mainly about the training data. Many far-right users, especially in the U.S., object to the moderation layers built into large language models, interpreting them as liberal bias or censorship.

    These safeguards are designed to reduce reputational harm to the owning corporation ("Your AI said this!" stories), prevent other harms like instructions for WMDs you can make in your kitchen or dangerous medical advice, and ensure respectful engagement outside their group ("Why can't I say [slur]?" etc.).

    But some see this as suppressing their views, particularly when their rhetorical style involves contrarian, conspiracy, or adversarial framing. What they call bias is often just friction against unsubstantiated or exclusionary claims.

    The issue isn’t that the model “leans left”, it’s that it doesn’t lean into falsehoods, aggression, or tribal shibboleths, which some have grown used to delivering unchallenged.
     
    Last edited: Jun 21, 2025
    #78     Jun 21, 2025
  9. I wasn't trying to make a point wrt training data or bias. I just thought it was interesting the repub political consultants use LLMs more often than dems.
     
    #79     Jun 21, 2025
  10. Tuxan

    Tuxan

    Yes, my point after the first line goes back to last night.

    I'm not surprised the right uses LLMs more, if they do, but Grok is built into X, and X is avoided by many on the left, so which LLMs? Or are they using TruthGPT or others?
     
    #80     Jun 21, 2025