ChatGPT (LLMs) replacing programmers? Ridiculous

Discussion in 'Artificial Intelligence' started by Aquarians, Apr 4, 2024.

  1. Businessman

    Businessman

    Another example I came across on the internet, where LLMs are smart and dumb at the same time.

    They really have a lot of fundamental flaws at the moment.


    [Attachment: Screenshot 2024-04-07 at 12.50.57.png]
     
    Last edited: Apr 7, 2024
    #41     Apr 7, 2024
    beginner66 and Drawdown Addict like this.
  2. BAT31

    BAT31

    ChatGPT won't replace programmers. It's a tool that can make programmers better and more efficient. I think the tool offers more value to novice developers than seasoned ones.
     
    #42     Apr 7, 2024
    comagnum likes this.
  3. itg

    itg

    Ridiculous? No. Scary? Yes!
    Give an LLM hardware knowledge + software knowledge (languages) + design knowledge + architecture knowledge + debugging knowledge + access to hardware/machines + a specified task, and it will be able to write better code than a human can... 3-5 versions from now. Inevitable.
    Don't resist it; admit it.

    And that's the scary part: AI writing and executing its own code, autonomously.

    Version 000 will be able to design a mini universe for your pleasure, based on your text description... if the thing does not get out of control by then and start scaling up against the humans.

    But then, you are already inside one of those.
     
    #43     Apr 9, 2024
  4. Businessman

    Businessman

    Not versions; they need new breakthroughs. Something different and much better than these dumb LLMs.

    Now that they are developing in secret, it's going to take longer; before, everything was open and shared. I think it was Google who discovered the initial breakthroughs regarding LLMs, and they published them; now anything valuable won't get published, it gets kept secret. E.g. no one even knows for sure exactly how many parameters GPT-4 uses (except OpenAI). And that's not even a big deal, but they still keep it secret.
     
    Last edited: Apr 9, 2024
    #44     Apr 9, 2024
  5. itg

    itg

    #45     Apr 9, 2024
  6. Businessman

    Businessman

    They can keep scaling, but they don't have any more data. They have scraped the internet already.

    Stack Overflow and GitHub: they have ingested all that code already.

    That is what these LLMs do: just spit out customised versions of the boilerplate code they read on Stack Overflow and GitHub.

    They look like amazing coders when they hit a match.

    But they cannot create anything new, just as they cannot write the next Lord of the Rings (or any epic novel), or even a short original movie script that people would pay to watch.
     
    Last edited: Apr 9, 2024
    #46     Apr 9, 2024
  7. itg

    itg

    Yeah, but how much code is that compared to the whole code universe, 1-2%?
    A lot of it is private code that the model will never be fed and will never see.
    But suppose Microsoft decides to feed it the full Windows code, or the Debian code, and let it learn from it; given, say, access to a machine, it could learn to code drivers and OS software for it, emphasizing comparative efficiency.
    And given the lack of code examples, it could generate its own examples and evolve from there.
    (Plus all the programming books, if they haven't already been fed into it.)

    Nature learns using: time + trial and error + alterations + adaptation + a survival-of-the-fittest function.

    Note, too, that AI methods two decades ago had already tackled classification, regression, prediction, evolution (knowledge inference), and optimization.

    Knowledge inference (generating something new) means deriving new information from a set of available information: model search and tweaking, model generation, model validation, and utilization of the new knowledge. AI can solve those problems.

    The "escape" problem: only humans can scale the hardware for AI to grow, but theoretically AI could still escape and utilize user machines to scale up its hardware in stealth, or even manipulate humans into scaling more than needed. Low chance of that, though.
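    The trial-and-error loop described above can be sketched in a few lines. This is a toy (1+1) evolutionary algorithm on the standard "OneMax" benchmark; all names here are illustrative, not from any specific library:

```python
import random

def evolve(fitness, genome_len=10, generations=300, seed=0):
    """Toy (1+1) evolutionary loop: alter one bit, keep the fitter survivor.
    Illustrates trial and error + alteration + survival of the fittest."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(genome_len)]
    for _ in range(generations):
        child = best[:]
        child[rng.randrange(genome_len)] ^= 1   # alteration: flip one random bit
        if fitness(child) >= fitness(best):     # selection: keep the fitter genome
            best = child
    return best

# Fitness = number of 1-bits ("OneMax", a standard toy problem).
best = evolve(fitness=sum)   # converges to the all-ones genome
```

    Real evolutionary computation uses populations, crossover, and far richer fitness functions, but the core loop is exactly this.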

    Trading-wise, I'd say 5-10 years and the accuracy will be high enough, but only large firms will be able to afford it...
    20 years ago skeptics were saying bots would never out-trade humans. Yeah, right...

    I say the genie is out of the bottle,
    and back into the Microsoft bottle.
     
    #47     Apr 9, 2024
  8. Businessman

    Businessman

    I don't doubt that one day, sooner or later, machines will be good enough to replace almost 100% of programmers.
    But at that point they will be good enough to replace almost 100% of every other job, including all manual ones, through improved robotics.

    But just scaling these LLMs won't get us there; we need something much better. And that thing hasn't been invented or discovered yet.
     
    Last edited: Apr 10, 2024
    #48     Apr 10, 2024
  9. People speak as if we have found a textbook on nuclear physics that is going to read itself and create a bomb.

    There is an obvious and clear path from word vectors, where King - Man + Woman = Queen, to ChatGPT, but extending this to basically everything is just confusing reality with Isaac Asimov novels.
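    The word-vector analogy can be reproduced with a toy example. The 3-d vectors below are hand-crafted for illustration; real word2vec embeddings are learned, hundreds of dimensions wide, and only approximately satisfy such analogies:

```python
import math

# Toy hand-crafted "embeddings" (not learned): the dimensions roughly
# encode [royalty, male, female].
vecs = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
    "apple": [0.05, 0.1, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# king - man + woman, component-wise
target = [k - m + w for k, m, w in zip(vecs["king"], vecs["man"], vecs["woman"])]

# Nearest remaining word by cosine similarity (query terms excluded)
answer = max(
    (w for w in vecs if w not in {"king", "man", "woman"}),
    key=lambda w: cosine(vecs[w], target),
)
# answer == "queen"
```

    The arithmetic only works because the geometry of the vector space encodes the male/female and royalty relations; nothing in the code "knows" what a queen is.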

    I even have ChatGPT open in another window right now, looking at this pretty amazing closure with a recursive function it made in Python last night. You know why it did that? Because I asked it to. The computer is not alive and has no agency; it is just software.
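    The poster's actual snippet isn't shown, but a "closure with a recursive function" in Python might look like this memoized Fibonacci (a sketch of the pattern, not the code ChatGPT produced):

```python
def make_fib():
    """Return a fib function that closes over its own memo cache:
    a closure wrapping a recursive function."""
    cache = {0: 0, 1: 1}

    def fib(n):
        if n not in cache:
            # Recursive calls reuse results stored in the enclosed cache.
            cache[n] = fib(n - 1) + fib(n - 2)
        return cache[n]

    return fib

fib = make_fib()
fib(10)  # -> 55
```

    The cache lives in the enclosing scope, so it persists between calls without any global state, which is the point of using a closure here.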

    Markets are a great example, really. The markets have been dominated by AI for the last 20 years, yet millions of people still work in finance and markets. We are even all still on this board talking about this. The pit trader has largely been put out of business, but pit traders had highly valuable skills to transfer to other areas and domains.

    I think right now AI is the dream of lazy people and collectivists who obviously aren't even using the tools we have right now: AI is going to replace all the jobs and we will all just live happily ever after on social security. It just isn't well thought out at all. If that were reality, then a huge part of the economy would be people working in chip factories.
     
    #49     Apr 10, 2024
  10. There are hundreds of billions of dollars — family offices, pension funds, sovereign wealth funds — that are desperate to find returns. But in AI in particular, the money and the patience are running out — because the systems don’t have a path to profitable functionality.

    In October 2022, Stability AI raised $100 million at a $1 billion valuation. By October 2023, they had $4 million cash left — and couldn’t get more because their investors were no longer interested in setting money on fire.

    The valuations anticipate trillion dollar markets, but, to begin with, the revenue isn’t there yet, and might never come. The whole generative AI field, at least at current valuations, could come to a fairly swift end.

    Compare AI to bitcoin, which keeps coming back like a bad Ponzi. But AI doesn’t have the Ponzi-like structure of crypto — there’s no path to getting rich for free for the common minion that would sustain it that far beyond all reason.

    The fundamental error that almost everyone is making is believing that generative AI is tantamount to AGI (artificial general intelligence, as smart and resourceful as humans, if not more so). Everybody in the industry would probably like you to believe that AGI is imminent. It stokes their narrative of inevitability, and it drives their stock prices and startup valuations.

    Generative AI is famous for "hallucinating" made-up answers with wrong facts. These are crippling to the credibility of AI-driven products. The bad news is that the hallucinations are not decreasing; in fact, they are getting worse. All output from an LLM is a "hallucination", generated from the latent space between the training data points. LLMs are machines for generating convincing-sounding nonsense; "facts" are not a type of data in LLMs.
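    A toy sketch of the mechanism being described: a language model samples the next token from learned conditional probabilities, with no separate store of facts. The bigram table below is hand-set for illustration (hypothetical numbers, not taken from any real model); a fluent-but-false continuation is just another sample:

```python
import random

# Hand-set next-word distributions (hypothetical). A real LLM learns
# billions of such conditional probabilities, but the key point is the
# same: output is sampled, with no notion of true vs. false.
model = {
    ("the", "capital"): [("of", 1.0)],
    ("capital", "of"): [("France", 0.6), ("Mars", 0.4)],  # both are fluent
    ("of", "France"): [("is", 1.0)],
    ("of", "Mars"): [("is", 1.0)],
    ("France", "is"): [("Paris", 1.0)],
    ("Mars", "is"): [("Olympus", 1.0)],
}

def generate(prompt, steps, seed):
    """Sample a continuation from the bigram table, one token at a time."""
    rng = random.Random(seed)
    words = list(prompt)
    for _ in range(steps):
        options = model.get(tuple(words[-2:]))
        if not options:
            break
        tokens, weights = zip(*options)
        words.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(words)

generate(["the", "capital"], steps=4, seed=1)
```

    Both possible outputs are grammatical; one is false. Nothing in the sampling loop distinguishes them, which is the sense in which every completion is "generated" rather than "looked up".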

    If hallucinations aren’t fixable, generative AI probably isn’t going to make a trillion dollars a year. And if it probably isn’t going to make a trillion dollars a year, it probably isn’t going to have the impact people seem to be expecting. And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.
     
    #50     Apr 13, 2024