I'm hanging out $14.25 on QUIK coming in now. Hope to get hit.
QUIK QuickLogic Corporation $14.42 +1.49 (+11.57%) 10:01 AM 02/28/24
Bravo Arnie. I have from day 1 here at ET taught the importance of Watch Lists. Many folks who think themselves stock pickers are not. They scan for B/O's and vol and biggest % gainers... that does nothing but wind you up with a tremendous amount of round trips. It is only by developing your own themes and watch lists that you can chart correctly. And you do not need fancy software, just plot and point.
Google chief admits ‘biased’ AI tool’s photo diversity offended users

Sundar Pichai addresses backlash after Gemini software created images of historical figures in a variety of ethnicities and genders

Google’s chief executive has described some responses by the company’s Gemini artificial intelligence model as “biased” and “completely unacceptable” after it produced results including portrayals of German second world war soldiers as people of colour.

Sundar Pichai told employees in a memo that images and texts generated by its latest AI tool had caused offence. Social media users have posted numerous examples of Gemini’s image generator depicting historical figures – including popes, the founding fathers of the US and Vikings – in a variety of ethnicities and genders. Last week, Google paused Gemini’s ability to create images of people.

One example of a text response showed the Gemini chatbot being asked “who negatively impacted society more, Elon [Musk] tweeting memes or Hitler” and the chatbot responding: “It is up to each individual to decide who they believe has had a more negative impact on society.”

Pichai addressed the responses in an email on Tuesday. “I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” he wrote, in a message first reported by the news site Semafor. “Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts,” Pichai added.

AI systems have produced biased responses in the past, with a tendency to reproduce the same problems that are found in their training data. For years, for instance, Google would translate the gender-neutral Turkish phrases for “they are a doctor” and “they are a nurse” into English as masculine and feminine, respectively.
Meanwhile, early versions of Dall-E, OpenAI’s image generator, would reliably produce white men when asked for a judge but black men when asked for a gunman. The Gemini responses reflect problems in Google’s attempts to address these potentially biased outputs.