Artificial Intelligence

Discussion in 'Chit Chat' started by expiated, May 29, 2023.

  1. gwb-trading

    Soon software engineers will simply be generating the majority of their code with AI and focusing on other aspects of their role (software architecture, documentation, testing, etc.).

    This does, however, raise the issue of products incorporating open-source material (ingested by the generative AI) without agreeing to the underlying license agreements. It is coupled with the reality that a bug introduced by AI is likely to be found broadly across products from multiple companies.

    OpenAI launches Codex, an AI coding agent, in ChatGPT
    https://finance.yahoo.com/news/openai-launches-codex-ai-coding-150000373.html
     
    #41     May 16, 2025
  2. gwb-trading

    Run AI locally on your phone.

    Google quietly released an app that lets you download and run AI models locally
    https://finance.yahoo.com/news/google-quietly-released-app-lets-140446513.html

    Last week, Google quietly released an app that lets users run a range of openly available AI models from the AI dev platform Hugging Face on their phones.

    Called Google AI Edge Gallery, the app is available for Android and will soon come to iOS. It allows users to find, download, and run compatible models that generate images, answer questions, write and edit code, and more. The models run offline, without needing an internet connection, tapping into supported phones' processors.

    AI models running in the cloud are often more powerful than their local counterparts, but they also have their downsides. Some users might be wary of sending personal or sensitive data to a remote data center, or want to have models available without needing to find a Wi-Fi or cellular connection.

    Google AI Edge Gallery, which Google is calling an "experimental Alpha release," can be downloaded from GitHub by following these instructions. The home screen shows shortcuts to AI tasks and capabilities like "Ask Image" and "AI Chat." Tapping on a capability pulls up a list of models suited for the task, such as Google's Gemma 3n.

    Google AI Edge Gallery also provides a "Prompt Lab" users can use to kick off "single-turn" tasks powered by models, like summarizing and rewriting text. The Prompt Lab comes with several task templates and configurable settings to fine-tune the models' behaviors.
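
    For a rough sense of what a "single-turn" task on a locally stored open model involves under the hood, here is a minimal sketch using the Hugging Face transformers library on a desktop (not the Edge Gallery app itself); the model id "Qwen/Qwen2.5-0.5B-Instruct" is only an assumed example of a small open model, not one the article names.

    ```python
    # Minimal sketch: one-shot text generation from a locally cached open model.
    # local_files_only=True keeps inference fully offline once the weights have
    # already been downloaded to the local Hugging Face cache.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative small open model

    tok = AutoTokenizer.from_pretrained(model_id, local_files_only=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, local_files_only=True)

    prompt = "Rewrite in one sentence: on-device AI keeps your data on the phone."
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=60)

    # Print only the newly generated tokens, not the echoed prompt.
    print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
    ```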

    Your mileage may vary in terms of performance, Google warns. Modern devices with more powerful hardware will predictably run models faster, but the model size also matters. Larger models will take more time to complete a task — say, answering a question about an image — than smaller models.

    Google's inviting members of the developer community to give feedback on the Google AI Edge Gallery experience. The app is under an Apache 2.0 license, meaning it can be used in most contexts — commercial or otherwise — without restriction.

    This article originally appeared on TechCrunch at https://techcrunch.com/2025/05/31/g...-lets-you-download-and-run-ai-models-locally/
     
    #42     May 31, 2025
  3. gwb-trading

    AI Slop: Last Week Tonight with John Oliver
     
    #43     Jun 23, 2025
    Aquarians likes this.
  4. Come to the restaurant tonight and go straight to the men's room. I can answer all of your AI questions. AI will never take my job away.
     
    #44     Jun 23, 2025
  5. ph1l

    #45     Jun 23, 2025
    spy likes this.
  6. Like honestly, I'm considering giving up on Facebook, I mean Fecesbook, since the content there, already low quality to begin with, has become mostly a cesspool of AI crap. And they even offer a paid subscription to remove ads. Like YouTube, but they are nowhere near YouTube in quality of content. Not to mention that recently I've been inundated with AI-generated scams of the Nigerian get-rich-quick variety. I press the "report spam" button, they say "thank you, we've taken notice", and 5 seconds later I get another ad of the exact same scammy nature. They're both clueless and evil, the Fecesbook decision makers I mean.

    I guess it's up to the individual now, and there will again be a split between the elite intellectuals and the unwashed masses who only want "panem et circenses" (bread and circuses) and not real information and culture. I still buy printed magazines with quality content, written by competent humans rather than third-world assholes shitting out AI slop. For now they're in my native language, but I'm considering extending my subscriptions to something like The New Yorker; at $2.50 per week, it's the price of a coffee.

    Overall, there's nothing I can do to stop spammers from vomiting AI slop, but I can pick my sources and refuse to eat from the trough with the pigs.
     
    Last edited: Jun 24, 2025
    #46     Jun 24, 2025
    gwb-trading likes this.
  7. gwb-trading

    gwb-trading

    So the people hired to train AI systems basically fed them slop.

    The AI Company Zuckerberg Just Poured $14 Billion Into Is Reportedly a Clown Show of Ludicrous Incompetence
    "They just hired everybody who could breathe."
    https://futurism.com/scale-ai-zuckerberg-incompetence

    Presumably, when you dump $14 billion into a company to buy a 49 percent stake — as Meta just did in Scale AI — you're confident that said company a) will help you make a lot of money and b) knows what it's doing.

    But a new scoop from Inc Magazine suggests that Scale AI — co-founded by 28-year-old zillionaire Alexandr Wang, whose first name does indeed lack a letter "e" between the "d" and "r" — is a massive clown show behind the scenes.

    Back when it worked with Google (the two just broke up following Meta's takeover), Scale AI reportedly became overrun with countless "spammers" who fleeced the company for bogus work by taking advantage of its laughable security and vetting protocols — an episode that encapsulates its struggles to meet the demands of a huge client like Google.

    Scale AI is basically a data annotation hub that does essential grunt work for the AI industry. To train an AI model, you need quality data. And for that data to mean anything, an AI model needs to know what it's looking at. Annotators manually go in and add that context.
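
    To make the annotation idea concrete, a record might look roughly like the sketch below. This is purely hypothetical (not Scale AI's actual schema): it just pairs raw data with the human-added labels a model is trained on.

    ```python
    # Hypothetical annotation record (illustrative only, not Scale AI's real format).
    # "data" is what the model sees; "annotations" hold the human-added context.
    annotation_task = {
        "task_id": "img-000123",
        "data": {"image_url": "https://example.com/photos/cat_on_sofa.jpg"},
        "instructions": "Draw a box around each animal and label its species.",
        "annotations": [
            {"label": "cat", "bbox": [34, 80, 210, 260], "annotator": "worker-481"},
        ],
        "review": {"status": "approved", "reviewer": "qa-007"},
    }
    ```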

    As is the means du jour in corporate America, Scale AI built its business model on an army of egregiously underpaid gig workers, many of them overseas. The conditions have been described as "digital sweatshops," and many workers have accused Scale AI of wage theft.

    It turns out this was not an environment for fostering high-quality work.

    According to internal documents obtained by Inc, Scale AI's "Bulba Experts" program to train Google's AI systems was supposed to be staffed with authorities across relevant fields. But instead, during a chaotic 11 months between March 2023 and April 2024, its dubious "contributors" inundated the program with "spam," which was described as "writing gibberish, writing incorrect information, GPT-generated thought processes."

    In many cases, the spammers, who were independent contractors who worked through Scale AI-owned platforms like Remotasks and Outlier, still got paid for submitting complete nonsense, according to former Scale contractors, since it became almost impossible to catch them all. And even if they did get caught, some would come back by simply using a VPN.

    "People made so much money," a former contributor told Inc. "They just hired everybody who could breathe."

    The work often called for advanced degrees that many contributors didn't have, the former contributor said. And seemingly, no one was vetting who was coming in.

    "There were no background checks whatsoever," a former queue manager for Remotasks, who was in charge of reviewing and approving the contributors' work, told Inc. "For example, the clients would have requirements for people working on projects to have certain degrees. But there were no verification checks... Often it was people that weren't native English speakers."

    Spammers "could get away with just totally submitting garbage and there weren't enough people to track them down," the former queue manager added. They also recalled how Scale AI's Allocations team in charge of assigning contributors once "dumped 800 spammers" into their team who proceeded to spam "all of the tasks."

    Attempts at cracking down were crude. Per Inc, various memos and guidelines called for either denying or removing contributors from specific countries, including Egypt, Pakistan, Kenya, and Venezuela.

    The program also got a little taste of the technology it was helping to create. Spammers were submitting so much AI-generated junk that supervisors were advised to use a tool called ZeroGPT, intended to detect ChatGPT usage, to vet entries.

    It makes you wonder just how much gibberish slipped through the cracks and ended up being internalized by Google's AI models. Perhaps it could explain a little about its infamously shoddy AI Overviews feature.

    For its part, a Scale AI spokesperson dismissed the claims.

    "This story is filled with so many inaccuracies, it's hard to keep track," the spokesperson said in a statement to Inc. "What these documents show, and what we explained to Inc ahead of publishing, is that we had clear safeguards in place to detect and remove spam before anything goes to customers."