Retro Chip.....Tiny RAM: AI's Power Unleashed....if this is any bit true.......

Discussion in 'Wall St. News' started by S2007S, May 15, 2025 at 5:01 AM.

  1. S2007S

  2. nitrene

    Get real. You really believe 128 MB of RAM has any use in modern computing? You can't even load a decent Linux kernel with that little RAM.

    DeepSeek was itself a mirage. They were using smuggled H100s from Singapore. You really expect the Chinese to tell you the truth?

    People on ET will believe anything written on the net. Just like everyone was sure the hyperscalers would magically make a better GPU than Nvidia, you know, the company that created the industry and has a 25-year head start on everyone.
     
  3. maxinger



    Always read such news with a pinch of salt.
    Then you will never be conned / scammed.
    And you could con the conman / scam the scammer.
     
  4. Businessman

    Next version will run on a 1K ZX81
     
  5. S2007S

    So basically more media lies....


    Dang it


    Guess they will have to pay $50,000 a pop for those Blackwells
     
  6. Businessman


    Running AI models and training them require very different amounts of compute power.

    The article says the LLM they ran had 260,000 parameters. I have no idea how useful such a small LLM would be, so I asked the AI:


    A language model with 250,000 parameters is extremely small by modern standards and would generally not be considered "good" for most real-world NLP tasks.

    Here's why:
    • Modern LLMs like GPT-3, GPT-4, Claude, or LLaMA models typically have billions (and even trillions) of parameters.

    • A model with 250k parameters is more like a toy model—useful for:
      • Educational purposes

      • Testing architectures

      • Running on extremely low-resource environments (e.g., microcontrollers)

      • Very simple or narrow tasks (like basic text classification, rule-based chatbots)
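    For some perspective on why such a tiny model could fit on a retro chip at all, the raw weight storage is easy to estimate from the 260,000-parameter figure the article quotes. A minimal sketch (the byte widths below are common precision choices, not anything the article specifies, and this ignores activations and other runtime overhead):

    ```python
    # Rough weight-storage estimate for a 260,000-parameter model,
    # the figure quoted in the article discussed above.
    PARAMS = 260_000

    # Common parameter precisions and their sizes in bytes
    # (assumed for illustration; the article doesn't say which was used).
    BYTES_PER_PARAM = {
        "float32": 4,   # full precision
        "float16": 2,   # half precision
        "int8":    1,   # 8-bit quantized
    }

    def footprint_kib(params: int, bytes_per_param: int) -> float:
        """Raw weight storage in KiB, ignoring activations and buffers."""
        return params * bytes_per_param / 1024

    for fmt, width in BYTES_PER_PARAM.items():
        print(f"{fmt:>8}: {footprint_kib(PARAMS, width):8.1f} KiB")
    ```

    Even at full float32 precision that is only about 1 MiB of weights, which is why a model this small can sit comfortably in the kind of tiny RAM the thread title is mocking.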