Additionally, let's point out that AI is primarily hype rather than reality. Even companies that have nothing to do with AI are inventing an AI narrative in their marketing literature. At the top of the list are the executives of AI development companies, who pitch narratives about a world fundamentally changed by their AI products. Most of their exaggerated claims are bunk. In Anthropic CEO Dario Amodei's fantasy AI world, "cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs, including half of all entry-level office jobs being eliminated". All within a couple of years from now.

The ‘white-collar bloodbath’ is all part of the AI hype machine
https://edition.cnn.com/2025/05/30/business/anthropic-amodei-ai-jobs-nightcap

We may also want to mention that Anthropic's AI product, Claude, is a market underperformer compared to the other AI products on the market in terms of uptake. Anthropic has a mere 0.11% market share, and it is steadily declining. ChatGPT, Grok, Gemini, DeepSeek, and others are far more widely used in a very crowded market. That is why the Anthropic CEO spends his time making shocking statements to stir up the market. Maybe he should instead focus on turning his company around: creating AI tools beyond Claude that are useful to multiple niches in the AI market and that grow the company's market share enough for it to be acquired or have an IPO.
@gwb-trading @Nine_Ender I just can't understand how someone can't see this coming. Is it really your opinion that smarter, faster, and cheaper doesn't win in business?

What Happens When AI Replaces Workers?
https://time.com/7289692/when-ai-replaces-workers/

On Wednesday, Anthropic CEO Dario Amodei declared AI could eliminate half of all entry-level white-collar jobs within five years. Last week, a senior LinkedIn executive reported that AI is already starting to take jobs from new grads. In April, Fiverr’s CEO made it clear: “AI is coming for your job. Heck, it’s coming for my job too.” Even the new Pope is warning about AI’s dramatic potential to reshape our economy.

Why do they think this? The stated goal of the major AI companies is to build artificial general intelligence, or AGI, defined as “a highly autonomous system that outperforms humans at most economically valuable work.” This isn’t empty rhetoric—companies are spending over a trillion dollars to build towards AGI. And governments around the world are supporting the race to develop this technology.

They’re on track to succeed. Today’s AI models can score as well as humans on many standardized tests. They are better competitive programmers than most programming professionals. They beat everyone except the top experts on science questions. As a result, AI industry leaders believe they could achieve AGI sometime between 2026 and 2035. Among insiders at the top AI companies, it is the near-consensus opinion that the day of most people’s technological unemployment, when they lose their jobs to AI, will arrive soon.

AGI is coming for every part of the labor market. It will hit white-collar workplaces first, and soon after will reach blue-collar workplaces as robotics advances. In the post-AGI world, an AI can likely do your work better and cheaper than you can. While training a frontier AI model is expensive, running additional copies of it is cheap, and the associated costs are falling rapidly.
A commonly proposed solution for an impending era of technological unemployment is a government-granted universal basic income (UBI). But this could dramatically change how citizens participate in society, because work is most people’s primary bargaining chip. Our modern world is upheld by a simple exchange: you work for someone with money to pay you, because you have time or skills that they don’t have. The economy depends on workers’ skills, judgment, and consumption. As such, workers have historically bargained for higher wages and 40-hour work weeks because the economy depends on them.

With AGI, we are poised to change, if not entirely sever, that relationship. For the first time in human history, capital might fully substitute for labor. If this happens, workers won’t be necessary for the creation of value because machines will do it better and cheaper. As a result, your company won’t need you to increase its profits and your government won’t need you for its tax revenue. We could face what we call “The Intelligence Curse”: powerful actors such as governments and companies create AGI, and subsequently lose their incentives to invest in people. Just like in oil-rich states afflicted with the “resource curse,” governments won’t have to invest in their populations to sustain their power. In the worst-case scenario, they won’t have to care about humans, so they won’t.

But our technological path is not predetermined. We can build our way out of this problem. Many of the people grappling with the other major risks from AGI—that it goes rogue, or helps terrorists create bioweapons, for example—focus on centralizing and regulatory solutions: track all the AI chips, require permits to train AI models. They want to make sure bad actors can’t get their hands on powerful AI, and that no one accidentally builds AI that could literally end the world. However, AGI will not just be the means of mass destruction—it will be the means of production too.
And centralizing the means of production is not just a security issue; it is a fundamental decision about who has power. We should instead avert the security threats from AI by building technology that defends us. AI itself could help us make sure the code that runs our infrastructure is secure from attacks. Investments in biosecurity could block engineered pandemics. An Operation Warp Speed for AI alignment could ensure that AGI doesn’t go rogue. And if we protect the world against the extreme threats that AGI might bring about, we can diffuse this technology broadly, to keep power in your hands.

We should accelerate human-boosting AI over human-automating AI. Steve Jobs once called computers “bicycles for the mind,” after the way they make us faster and more efficient. With AI, we should aim for a motorcycle for the mind, rather than a wholesale replacement of it. The market for technologies that keep and expand our power will be tremendous. Already today, the fastest-growing AI startups are those that augment rather than automate humans, such as the code editor Cursor. And as AI gets ever more powerful and autonomous, building human-boosting tools today could set the stage for human-owned tools tomorrow. AI tools could capture the tacit knowledge visible to you every day and turn it into your personal data moat.

The role of the labor of the masses can be replaced either with the AI and capital of a few, or the AI and capital of us all. We should build technologies that let regular people train their own AI models, run them on affordable hardware, and keep control of their data—instead of everything running through a few big companies. You could be the owner of a business, deploying AI you control on data you own to solve problems that feel unfathomable to you today.
Your role in the economy could move from direct labor, to managing AI systems the way the CEO of a company manages their direct reports, to steering the direction of AI systems working for you, like a company board weighing in on long-term direction. The economy could run on autopilot and superhumanly fast. Even when AI can work better than you, if you own and control your piece of it, you could be a player with real power—rather than just hoping for a UBI that might never come. To adapt the words of G. K. Chesterton, the problem with AI capitalism is that there aren’t enough capitalists. If everyone owns a piece of the AI future, all of us can win.

And of course, AGI will make good institutions and governance more important than ever. We need to strengthen democracy against corruption and the pull of economic incentives before AGI arrives, to ensure regular people can win if we reach the point where governments and large corporations don’t need us.

What’s happening right now is an AGI race, even if most of the world hasn’t woken up to it. The AI labs have an advantage in AI, but to automate everyone else they need to train their AIs in the skills and knowledge that run the economy, and then go and outcompete the people currently providing those goods and services. Can we use AI to lift ourselves up, before the AI labs train the AIs that replace us? Can we retain control over the economy, even as AI becomes superintelligent? Can we achieve a future where power still comes from the people? It is up to us all to answer those questions.
Because those of us who have meaningful experience in technology understand the dynamics in play, whereas you clearly don't. The only thing threatening employment right now is Trump's ridiculous economic policies. Also note that in Q1, Canada's GDP was +2.2% while the US was -0.1%. That's quite a change from recent years. What changed late last year?
This is the same Time magazine that ran a cover article in the 1970s about a coming near-term ice age that would make most of the northern United States uninhabitable by the turn of the century. Two decades later, the same Time magazine was publishing that global warming was going to kill us all. I am not going to take their alarmist articles about AI replacing workers seriously. Certainly it serves Time's purpose of earning revenue via clicks and sales. Now go read all the articles from 1900 about how automobiles were going to cause wide-scale unemployment across the U.S. in the buggy-whip and other businesses, effectively causing the end of the U.S. economy.
Anthropic's Claude can't even successfully run the company's own blog. The amusing part is that Claude is pitched primarily as a tool for authors.

Anthropic's AI-generated blog dies an early death
https://finance.yahoo.com/news/anthropics-ai-generated-blog-dies-150127043.html

A week after TechCrunch profiled Anthropic's experiment to task the company's Claude AI models with writing blog posts, Anthropic wound down the blog and redirected the address to its homepage. Sometime over the weekend, the Claude Explains blog disappeared — along with its initial few posts. A source familiar tells TechCrunch the blog was a "pilot" meant to help Anthropic's team combine customer requests for explainer-type "tips and tricks" content with marketing goals.

Claude Explains, which had a dedicated page on Anthropic's website and was edited for accuracy by humans, was populated by posts on technical topics related to various Claude use cases (e.g. “Simplify complex codebases with Claude”). The blog, which was intended to be a showcase of sorts for Claude's writing abilities, wasn't clear about how much of Claude's raw writing was making its way into each post. An Anthropic spokesperson previously told TechCrunch that the blog was overseen by "subject matter experts and editorial teams" who “enhance[d]” Claude’s drafts with “insights, practical examples, and […] contextual knowledge.” The spokesperson also said Claude Explains would expand to topics ranging from creative writing to data analysis to business strategy. Apparently, those plans changed in pretty short order.

"[Claude Explains is a] demonstration of how human expertise and AI capabilities can work together,” the spokesperson told TechCrunch earlier this month. "[The blog] is an early example of how teams can use AI to augment their work and provide greater value to their users. Rather than replacing human expertise, we’re showing how AI can amplify what subject matter experts can accomplish."
Claude Explains didn't get the rosiest reception on social media, in part due to the lack of transparency about which copy was AI-generated. Some users pointed out it looked a lot like an attempt to automate content marketing, an ad tactic that relies on generating content on popular topics to serve as a funnel for potential customers. More than 24 websites were linking to Claude Explains posts before Anthropic wound down the pilot, according to the search engine optimization tool Ahrefs. That's not bad for a blog that was only live for around a month.

Anthropic might've also grown wary of implying Claude performs better at writing tasks than is actually the case. Even the best AI today is prone to confidently making things up, which has led to embarrassing gaffes on the part of publishers that have publicly embraced the tech. For example, Bloomberg has had to correct dozens of AI-generated summaries of its articles, and G/O Media’s error-riddled AI-written features — published against editors’ wishes — attracted widespread ridicule.