Google announces Bard A.I. in response to ChatGPT

Discussion in 'Artificial Intelligence' started by TrAndy2022, Feb 6, 2023.

  1. For a good while before ChatGPT, people everywhere were saying that Google Search had become basically just a product-ad search. Every query would mostly produce a related item to sell you. If you asked a question that doesn't bring up a product for sale, it would just default you to Wikipedia.

    So, while Google was dumbing down their beloved search engine... others were inventing a whole new world of possibilities. Google rushed out a half-assed attempt at their own AI interface and crapped all over the rug.

    Yeah... it's true. Google has been far too big for far too long. Right now they look like a bloated behemoth with a very limited grasp on modern technology. Kinda like Microsoft back when they didn't think the internet was very important, then realized the stupidity and rushed out a copy of Netscape. They couldn't charge for it because they stole it. Google also looks like the Microsoft that tried to put out a smartphone, and it ended up being the worst phone ever offered to the market.

    So here we are with Google. A free browser that depends upon their search engine. Free Gmail (I still have an old Hotmail account somewhere around), a smartphone that is just horrible, a Fitbit watch and of course the requisite cloud services. Thank goodness YouTube is run fairly separately... but it's woke and controlled and fully biased. I'll be honest here, Google Maps doesn't even work half the time unless you are trying to locate a store nearby. Overseas, Google Maps doesn't work well at all. I really wonder why Samsung doesn't insist on being allowed to develop their own app store outside of Google Play? App developers hate being constrained to just Google Play and the App Store, and the cut those stores take from their app-sale income.

    I remember when Google was talking about low-orbit satellites with Greg Wyler long before Musk, medical robotics (Verily??), Google Glass, wearable informatics, Google X. All we have today is what's become a shit search engine constantly trying to sell you Chinese crap, a shit Chromebook, a half-assed attempt at buying a (defunct HTC) smartphone company, a Fitbit watch, and the ability to track your every move 24/7.

    https://www.forbes.com/sites/ericma...oogle-x-projects-that-failed/?sh=153a4aec7aa1

    Has anyone looked up Astro Teller? wtf... lol. Astro's latest project is titled "A Focus on Failure"

    https://www.inspiringtalks.net/motivation-success/astro-teller/


    Lordy Lordy... this stock looks massively overpriced. Yes... Google is currently around $1500 when considering recent splits. I really kinda hated to do this piece, but really... this company is just begging to be re-priced... just begging.
     
    Last edited: Feb 12, 2023
    #31     Feb 12, 2023
  2. d08


    Google has some cool open-source projects like TensorFlow; they weren't sleeping at all. Microsoft didn't do anything in the ML open-source world, and I wouldn't be surprised if they are using TensorFlow, Keras and PyTorch in ChatGPT.

    Google owns Android, the OS used by most phones in the world.

    Works perfectly well here; it's been my go-to navigation app since they added offline maps. I'm "overseas" and have never used Google Maps in the US.
     
    #32     Feb 12, 2023
  3. easymon1


    Watch for a sign of strength after this drop. ymmv.

     
    #34     Feb 12, 2023
  4. SunTrader


    https://www.theverge.com/2023/2/14/23599007/microsoft-bing-ai-mistakes-demo

    Microsoft’s Bing AI, like Google’s, also made dumb mistakes during first demo
    Bing AI users have found that Microsoft’s chatbot is making a lot of mistakes. It even made financial errors during Microsoft’s first demos.


    By TOM WARREN / @tomwarren

    Feb 14, 2023, 5:58 AM EST

    Google’s AI chatbot isn’t the only one to make factual errors during its first demo. Independent AI researcher Dmitri Brereton has discovered that Microsoft’s first Bing AI demos were full of financial data mistakes.

    Microsoft confidently demonstrated its Bing AI capabilities a week ago, with the search engine taking on tasks like providing pros and cons for top selling pet vacuums, planning a 5-day trip to Mexico City, and comparing data in financial reports. But, Bing failed to differentiate between a corded / cordless vacuum, missed relevant details for the bars it references in Mexico City, and mangled financial data — by far the biggest mistake.

    In one of the demos, Microsoft’s Bing AI attempts to summarize a Q3 2022 financial report for Gap clothing and gets a lot wrong. The Gap report (PDF) mentions that gross margin was 37.4 percent, with adjusted gross margin at 38.7 percent excluding an impairment charge. Bing inaccurately reports the gross margin as 37.4 percent including the adjustment and impairment charges.


    Bing then goes on to state Gap had a reported operating margin of 5.9 percent, which doesn’t appear in the financial results. The operating margin was 4.6 percent, or 3.9 percent adjusted and including the impairment charge.

    During Microsoft’s demo, Bing AI then goes on to compare Gap financial data to Lululemon’s same results during the Q3 2022 quarter. Bing makes more mistakes with the Lululemon data, and the result is a comparison riddled with inaccuracies.

    Brereton also highlights an apparent mistake with a query related to the pros and cons of top selling pet vacuums. Bing cites the “Bissell Pet Hair Eraser Handheld Vacuum,” and lists the con of it having a short cord length of 16 feet. “It doesn’t have a cord,” says Brereton. “It’s a portable handheld vacuum.”

    However, a quick Google search (or Bing!) will show there’s clearly a version of this vacuum with a 16-foot cord in both a written review and video. There’s also a cordless version, which is linked in the HGTV article that Bing sources. Without knowing the exact URL Bing sourced in Microsoft’s demo, it looks like Bing is using multiple data sources here without listing those sources fully, conflating two versions of a vacuum. The fact that Brereton himself made a small mistake in fact-checking Bing shows the difficulty in assessing the quality of these AI-generated answers.

    Bing’s AI mistakes aren’t limited to just its onstage demos, though. Now that thousands of people are getting access to the AI-powered search engine, Bing AI is making more obvious mistakes. In an exchange posted to Reddit, Bing AI gets super confused and argues that we’re in 2022. “I’m sorry, but today is not 2023. Today is 2022,” says Bing AI. When the Bing user says it’s 2023 on their phone, Bing suggests checking it has the correct settings and ensuring the phone doesn’t have “a virus or a bug that is messing with the date.”


    Microsoft is aware of this particular mistake. “We’re expecting that the system may make mistakes during this preview period, and the feedback is critical to help identify where things aren’t working well so we can learn and help the models get better,” says Caitlin Roulston, director of communications at Microsoft, in a statement to The Verge.

    Other Reddit users have found similar mistakes. Bing AI confidently and incorrectly states “Croatia left the EU in 2022,” sourcing itself twice for the data. PCWorld also found that Microsoft’s new Bing AI is teaching people ethnic slurs. Microsoft has now corrected the query that led to racial slurs being listed in Bing’s chat search results.

    “We have put guardrails in place to prevent the promotion of harmful or discriminatory content in accordance to our AI principles,” explains Roulston. “We are currently looking at additional improvements we can make as we continue to learn from the early phases of our launch. We are committed to improving the quality of this experience over time and to making it a helpful and inclusive tool for everyone.”

    Other Bing AI users have also found that the chatbot often refers to itself as Sydney, particularly when users are using prompt injections to try and surface the chatbot’s internal rules. “Sydney refers to an internal code name for a chat experience we were exploring previously,” says Roulston. “We are phasing out the name in preview, but it may still occasionally pop up.”

    Personally, I’ve been using the Bing AI chatbot for a week now and have been impressed with some results and frustrated with other inaccurate answers. Over the weekend I asked it for the latest cinema listings in London’s Leicester Square, and despite using sources for Cineworld and Odeon, it persisted in claiming that Spider-Man: No Way Home and The Matrix Resurrections, both films from 2021, were still being shown. Microsoft has now corrected this mistake, as I see correct listings now that I run the same query today, but the mistake made no sense when it was sourcing data with the correct listings.

    Microsoft clearly has a long way to go until this new Bing AI can confidently and accurately respond to all queries with factual data. We’ve seen similar mistakes from ChatGPT in the past, but Microsoft has integrated this functionality directly into its search engine as a live product that also relies on live data. Microsoft will need to make a lot of adjustments to ensure Bing AI stops confidently making mistakes using this data.
     
    #35     Feb 15, 2023
    Nobert likes this.
  5. Some use cases for ChatGPT:
    1) it could be used to provide medical advice or diagnoses based on a user's input (WebMD-type functionality)
    2) it could be used to provide instructions for cooking or car repair, ... (relaying archived information to users).
    3) translation services, dictation, note-taking, or data collection.
    4) Alexa-type tasks.
    5) currently, I think the IBM Watson server could beat both ChatGPT and Bing AI at chess and at Jeopardy! Both pretenders would need more transistors to get to human-like intelligence.
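    Most of these use cases amount to pointing one chat model at different jobs by swapping the system prompt. A minimal sketch in Python of how that routing could look (every name here is hypothetical, and the actual model call is deliberately left out):

```python
# Sketch: one chat model, several "use cases", selected by system prompt.
# SYSTEM_PROMPTS and build_request are illustrative names, not a real API.

SYSTEM_PROMPTS = {
    "cooking": "You are a cooking instructor. Give step-by-step recipes.",
    "translation": "Translate the user's text into English.",
    "notes": "Summarize the user's dictation into bullet-point notes.",
}

def build_request(use_case: str, user_text: str) -> list[dict]:
    """Assemble the chat messages for a given use case."""
    if use_case not in SYSTEM_PROMPTS:
        raise ValueError(f"unknown use case: {use_case}")
    return [
        {"role": "system", "content": SYSTEM_PROMPTS[use_case]},
        {"role": "user", "content": user_text},
    ]

# The resulting messages list would be sent to whatever chat model backs
# the product; only the system prompt changes between use cases.
messages = build_request("translation", "Bonjour le monde")
print(messages[0]["content"])  # → Translate the user's text into English.
```

    The point of the sketch is that "WebMD functionality" or "Alexa tasks" are not separate models, just separate prompts in front of the same one.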
     
    #36     Feb 16, 2023
  6. Overnight


    These AI chat things would burn out and blow up if you asked them if Manfred Mann was saying "Blinded by the light, revved up like a deuce, another runner in the night", or "Blinded by the light, revved up like a douche, another runner in the night". Even humans cannot agree on this, so no AI will ever figure it out.

    The AI will never be able to determine if it is deuce or douche, because AI cannot HEAR stuff. :)

    (After years of listening to it, I think it really is "deuce". It took some hi-fi recordings for me to hear it.)
     
    Last edited: Feb 16, 2023
    #37     Feb 16, 2023
  7. tomkat22


    LOL. I always thought he was saying "Cut loose like a deuce, you know they're on her in the night." I wonder if anyone has ever asked Springsteen what the correct lyrics are, since he's the one who wrote it.
     
    #38     Feb 17, 2023
  8. ph1l


    This is getting closer to being able to replace humans on social media. :caution:
    https://finance.yahoo.com/news/microsoft-ai-chatbot-threatens-expose-024521663.html
     
    #39     Feb 20, 2023
  9. virtusa


    Did Reddit hack and manipulate GPT?
     
    #40     Feb 23, 2023