ChatGPT (LLMs) replacing programmers? Ridiculous

Discussion in 'Artificial Intelligence' started by Aquarians, Apr 4, 2024.

  1.  
    #21     Apr 4, 2024
  2. 2rosy

    2rosy

strcpy, a blast from the past. I thought OpenBSD fixed that with strlcpy.
     
    #22     Apr 4, 2024
    Drawdown Addict likes this.
  3. R1234

    R1234

I find Stack Overflow piecemeal and time-consuming to sift through.
I'm actually a decent programmer in C and Fortran, but to my old brain, Python feels like an ad hoc collection of arbitrary constructs thrown together.
     
    #23     Apr 4, 2024
  4. Python is a horrible language. If you are coming from C, try C#: you can do anything you want with it, it is way faster than Python, and the .NET framework is ready for web and desktop development.
     
    #24     Apr 4, 2024
  5. Businessman

    Businessman

    Good answer!

    Prompt: "ChatGPT (LLMs) replacing programmers? Ridiculous"

    GPT 3.5 Answer:

    The idea of large language models (LLMs) like ChatGPT replacing programmers entirely is indeed a far-fetched notion and could be considered ridiculous for several reasons:

    1. Complexity of Programming Tasks: While LLMs are powerful in generating human-like text based on prompts, programming involves much more than just writing code. It requires problem-solving, algorithmic thinking, understanding of data structures, debugging skills, and domain-specific knowledge, which are not yet within the capabilities of LLMs.

    2. Understanding Context and Intent: Programming often involves understanding the context of a problem and the intent behind a solution. LLMs may generate code snippets based on given prompts, but they lack the ability to fully comprehend the nuances of a programming task, including user requirements, system constraints, and potential edge cases.

    3. Quality and Reliability: The code generated by LLMs may not always be of high quality or reliable. Programming requires writing efficient, well-structured, and maintainable code, which may not always be achieved solely through text generation by LLMs.

    4. Creativity and Innovation: Programming often requires creativity and innovation to come up with novel solutions to complex problems. While LLMs can assist in generating code based on existing patterns, they may not be capable of true innovation or thinking outside the box.

    5. Human Expertise and Oversight: Even if LLMs were to assist in programming tasks, human expertise and oversight would still be essential. Developers bring a wealth of experience, domain knowledge, and critical thinking to the table, which cannot be replaced by AI systems.

    In summary, while LLMs can be valuable tools for assisting programmers in certain tasks such as code autocompletion, documentation generation, or bug fixing, the idea of them completely replacing programmers is currently unrealistic given the complexity and multidimensional nature of programming tasks.
     
    #25     Apr 4, 2024
  6. RedDuke

    RedDuke

    It will replace tons of programmers of lower level.
     
    #26     Apr 4, 2024
    trismes likes this.
  7. comagnum

    comagnum

    ChatGPT is way too crude/inaccurate to use if you work in high tech. I think the G in ChatGPT stands for "guess."

    It's like outsourcing work to a flunky. Maybe years from now it will actually be intelligent instead of being a way overrated search engine.
     
    Last edited: Apr 4, 2024
    #27     Apr 4, 2024
  8. Businessman

    Businessman

    Microsoft Clippy on steroids.

     
    Last edited: Apr 4, 2024
    #28     Apr 4, 2024
    Aquarians likes this.
  9. LOL, good one :)

    Unless Skynet spontaneously emerges from the combination of Nvidia chipsets, nothing of substance will change. Right now, AI is about as conscious, intelligent, and capable of autonomous operation as the Mechanical Turk was in the 19th century: fascinating technology for its day, but useless without a real human behind it.
     
    #29     Apr 4, 2024
    comagnum likes this.
  10. Specterx

    Specterx

    What the vast majority don’t understand is that the LLM “hallucinations”, going off the reservation in response to prompts, seemingly elementary errors in output etc. are not small quirks which will get worked out of the model as it “learns”, but rather totally unavoidable results stemming from the system’s basic design.

    Neither more compute nor more training data is capable of eliminating these problems - the latter especially, since more and more of the ingested content is itself botshit. Therefore, LLMs will remain unsuitable for all but a handful of tasks, and end up being a huge failure from an economic/business POV.
     
    #30     Apr 4, 2024
    engineering and Aquarians like this.