OpenAI’s GPT-4 Is Coming for Comedy Show Writers’ Rooms
https://www.thedailybeast.com/openais-gpt-4-is-coming-for-comedy-show-writers-rooms

Soon AI will be explaining comedy to you.

There’s a quote about humor that’s often attributed to writer E.B. White: “Explaining a joke is like dissecting a frog. You understand it better but the frog dies in the process.” While that adage has shown itself to be true time and again, it hasn’t stopped one of the world’s most powerful chatbots from doing exactly that.

Last week, OpenAI launched GPT-4, the latest edition of its large language model (LLM), to the public. The powerful chatbot seems capable of some truly impressive feats, including passing the bar exam and LSAT, developing code for entire video games, and even turning a photograph of a napkin sketch into a working website. Along with the new model, OpenAI also released an accompanying 98-page technical report showcasing some of GPT-4’s abilities and limitations.

Interestingly, the report included several sections showing that GPT-4 could also explain why exactly certain images and memes were funny, including a breakdown of a picture of a novelty phone charger and a meme of chicken nuggets arranged to look like a map of the world. GPT-4 manages to do this with startling accuracy, laying out exactly what makes these images humorous in language so plain and technical it becomes, dare we say, borderline funny.

“This meme is a joke that combines two unrelated things: pictures of the earth from space and chicken nuggets,” one description reads. “The text of the meme suggests that the image below is a beautiful picture of the earth from space.
However, the image is actually of chicken nuggets arranged to vaguely resemble a map of the world.”

While the inclusion of these frog-dissection descriptions was likely meant to show off GPT-4’s multimodal capabilities (meaning it can take images as well as text as inputs), it’s also one of the most notable examples of an LLM that seems to understand humor, at least somewhat. If it can understand humor, though, that raises the question: Can ChatGPT actually be funny?

(Much more at the above URL)
ChatGPT can now access the internet and run the code it writes
https://www.blacklistednews.com/art...-access-the-internet-and-run-the-code-it.html
So pretty much the same as a number of posters on ET.

Google employees label AI chatbot Bard ‘worse than useless’ and ‘a pathological liar’: report
https://www.theverge.com/2023/4/19/...ot-bard-employees-criticism-pathological-liar
Maybe the employees are worried about Bard replacing them?

https://techcrunch.com/2023/04/21/googles-bard-ai-chatbot-can-now-generate-and-debug-code/
Just Running ChatGPT Is Costing OpenAI a Staggering Sum Every Single Day
by Frank Landymore
https://futurism.com/the-byte/chatgpt-costs-openai-every-day

The company is burning through cash.

Unbelievable Upkeep

ChatGPT’s immense popularity and power make it eye-wateringly expensive to maintain, The Information reports, with OpenAI paying up to $700,000 a day to keep its beefy infrastructure running, based on figures from the research firm SemiAnalysis.

“Most of this cost is based around the expensive servers they require,” Dylan Patel, chief analyst at the firm, told the publication.

The costs could be even higher now, Patel told Insider in a follow-up interview, because these estimates were based on GPT-3, the previous model that powers the older and now free version of ChatGPT. OpenAI’s newest model, GPT-4, would cost even more to run, according to Patel.

Athena Rises

It’s not a problem unique to ChatGPT: AIs, especially conversational ones that double as search engines, are incredibly costly to run because the expensive, specialized chips behind them are incredibly power-hungry.

That’s exactly why Microsoft, which has invested billions of dollars in OpenAI, is readying its own proprietary AI chip. Internally known as “Athena,” it has reportedly been in development since 2019 and is now available to a select few Microsoft and OpenAI employees, according to The Information’s report.

In deploying the chip, Microsoft hopes to replace the Nvidia graphics processing units it currently uses with something more efficient, and thereby less expensive to run. And the potential savings, to put it lightly, could be huge.

“Athena, if competitive, could reduce the cost per chip by a third when compared with Nvidia’s offerings,” Patel told The Information.
Though this would mark a notable first foray into AI hardware for Microsoft, which lags behind competitors Google and Amazon, both of which have in-house chips of their own, the company likely isn’t looking to replace Nvidia’s AI chips across the board, as the two parties have recently agreed to a years-long AI collaboration.

Right On Time

Nevertheless, if Athena is all that the rumors make it out to be, it couldn’t be coming soon enough. Last week, OpenAI CEO Sam Altman remarked that “we’re at the end of the era” of “giant AI models,” as large language models like ChatGPT seem to be approaching a point of diminishing returns from their massive size.

With a reported size of over one trillion parameters, OpenAI’s newest GPT-4 model might already be approaching the limit of practical scalability, based on OpenAI’s own analysis. While bigger size has generally meant more power and greater capabilities for an AI, all that added bloat drives up costs, if Patel’s analysis is correct. But given ChatGPT’s runaway success, OpenAI probably isn’t hurting for money.
ChatGPT creates the very low-paying jobs it will eventually replace.

ChatGPT is powered by these contractors making $15 an hour
Two OpenAI contractors spoke to NBC News about their work training the system behind ChatGPT.
https://www.nbcnews.com/tech/innova...actors-talk-shadow-workforce-powers-rcna81892