Hedges: Joe Biden’s Parting Gift to America Will Be Christian Fascism

Discussion in 'Politics' started by Ricter, Mar 20, 2024.

  1. Cuddles

    Cuddles

    We're not even there yet... but the sentiment is shared. I just didn't think a failure of curiosity about history would have us where we are.
     
    #11     Mar 21, 2024
    Ricter likes this.
  2. Ricter

    Ricter

    Some, the ones who study historical collapses, saw it coming back in the 70s. But what clouded the picture, and why the masses could possibly be excused for their distraction, is that the carbon pulse, i.e. the 500 million years of accumulated sunlight in fossil fuels we found, a one-time gift, has powered one helluva good party.
     
    #12     Mar 21, 2024
  3. Ricter

    Ricter

    Getting off topic here I suppose...

    Deindustrial Warfare: A First Reconnaissance

    January 31, 2024, by John Michael Greer
    This January has five Wednesdays, and in the usual way of this blog, the fifth Wednesday gets an essay on whatever topic the readers select by vote. As usual, it was a lively contest, but this time one of the perennial underdogs—warfare in the deindustrial age—came out on top.

    That didn’t surprise me greatly. The wars in Ukraine and the Middle East have been on many minds recently, not least because neither of them has been working out the way that our politicians and pundits insisted they would. A genuine revolution in military affairs is taking place right now, and no, it’s not the one that was so loudly ballyhooed in intellectual circles a couple of decades back. The claim in the 1990s was that computer technology had opened the way to a new kind of war, in which information would flow from the battlefield to headquarters and back, giving commanders total control over hypercomplex, hugely expensive militaries that would overwhelm more poorly equipped forces with ease.

    The new warfare we were supposed to get.

    That’s not what happened. On the battlefields of eastern Ukraine, the single most effective force the Ukrainian army has consists of little independent units huddled in bunkers just behind the lines, equipped with cheap drones. Right now Russia has the upper hand by every conventional measure; it has more troops, more tanks, more artillery, more ammunition and other expendables, and a vastly superior air force; its missiles pound Ukrainian targets hundreds of miles behind the lines—and yet it’s restricted to slow, grueling, trench-by-trench advances, because any attempt at a general assault in open country gets swarmed by drones and chopped to pieces. That’s a big part of what happened to the Ukrainian offensive on the southern front last year, too. The drone revolution has made defense more powerful than offense on both sides.

    The same thing mediated by a different set of technologies is going on in the Gaza Strip right now. The Israeli military is so much larger and better armed than the Hamas forces that in a conventional struggle there would be no contest at all, but the Hamas commanders aren’t stupid enough to meet the Israelis in a conventional struggle. Instead, a network of tunnels running all through the Gaza Strip allows Hamas forces to pop up, ambush Israeli detachments, and vanish again. It’s the same strategy Hezbollah forces in southern Lebanon used against the Israeli army in 2006, and it’s proving just as effective this time around.

    The new warfare we actually got.

    Then there’s the Ansarullah militia in Yemen, drawn mostly from the Houthi movement. Their approach to messing with the industrial West is just as cheap and just as effective. You don’t need a permanent installation to launch a drone against a ship passing through the Red Sea—the back of a truck is quite adequate—and so the US and British forces on the scene have nothing useful to bomb. Yes, some Ansarullah drones get shot down. So? It takes a missile costing US$2 million to down a drone that only costs US$2000, and the US factories that make the missiles don’t have the facilities or resources to ramp up production to wartime levels.
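    The cost asymmetry described above can be sketched with back-of-the-envelope arithmetic. The unit costs are the article's figures; the number of engagements is a hypothetical illustration, not a claim about any actual campaign:

```python
# Cost-exchange arithmetic for the drone-vs-missile asymmetry.
# Unit costs are the article's figures; the engagement count is hypothetical.

MISSILE_COST = 2_000_000  # US$ per interceptor missile (article's figure)
DRONE_COST = 2_000        # US$ per attack drone (article's figure)

def exchange_ratio(defender_cost: int, attacker_cost: int) -> float:
    """Dollars the defender spends per dollar the attacker spends."""
    return defender_cost / attacker_cost

ratio = exchange_ratio(MISSILE_COST, DRONE_COST)
print(f"Defender pays {ratio:.0f}x the attacker's cost per intercept")
# → Defender pays 1000x the attacker's cost per intercept

# Hypothetical campaign: 100 drones launched, every one intercepted.
drones = 100
print(f"Attacker spends ${drones * DRONE_COST:,}; "
      f"defender spends ${drones * MISSILE_COST:,}")
# → Attacker spends $200,000; defender spends $200,000,000
```

    Even a perfect intercept rate leaves the defender spending three orders of magnitude more than the attacker, which is the article's point about why the missile stockpile, not the drone supply, is the binding constraint.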

    The Ansarullah strategy is particularly clever because they don’t have to defeat the US and British navies. All they have to do is make the Red Sea too costly for commercial shipping to Israel and its allies, and they can do that by adding the risk of a drone strike (and the insurance premiums that follow from that risk) to the other costs and dangers shipping companies have to face. If Israel and its allies produced most of their goods and services at home, that wouldn’t be any kind of problem, but it turns out that one of the many downsides to economic globalization is that it holds every nation’s economy hostage to shipping disruptions in the major sea lanes.

    What has happened can be described very simply: the spectacularly overpriced armed services of the industrial world have passed their pull date. They no longer yield military power commensurate with their overwhelming expense: quite the contrary, cheaper ways of fighting wars can now overwhelm them. That’s something that happens routinely in the declining years of a civilization. A glance at an earlier example will help show how it plays out.

    Roman legionaries. The world’s best army, until it wasn’t.

    The example I have in mind is the battle of Adrianople in 378. Some battles have outsized implications in history, and this was one of them. Seven Roman legions led by Valens, emperor of the eastern half of the Roman Empire...

    More...
     
    Last edited: Mar 21, 2024
    #13     Mar 21, 2024
  4. Cuddles

    Cuddles

    15+ yrs ago, when these things were breaking into the hobby market, I wondered when Uncle Sam was going to regulate them (mostly out of FAA concerns). I'm glad they didn't, but the threat they pose, and how they've rendered much of our multi-trillion-dollar weapons systems sitting ducks or iron coffins, makes one revisit that conversation. A little too late, as doing so now puts us at a tactical disadvantage to other countries, China being the largest producer of these things.
     
    #14     Mar 21, 2024
    Ricter likes this.
  5. Ricter

    Ricter

    I got one even juicier for you...
     
    #15     Mar 21, 2024
  6. Ricter

    Ricter

    Why Artificial Intelligence Must Be Stopped Now
    By Richard Heinberg, originally published by Independent Media Institute

    March 21, 2024

    The promise of AI is eclipsed by its perils, which include our own annihilation.

    Introduction
    Those advocating for artificial intelligence tout the huge benefits of using this technology. For instance, an article in CNN points out how AI is helping Princeton scientists solve “a key problem” with fusion energy. AI that can translate text to audio and audio to text is making information more accessible. Many digital tasks can be done faster using this technology.

    However, any advantages that AI may promise are eclipsed by the cataclysmic dangers of this controversial new technology. Humanity has a narrow chance to stop a technological revolution whose unintended negative consequences will vastly outweigh any short-term benefits.

    In the early 20th century, people (notably in the United States) could conceivably have stopped the proliferation of automobiles by focusing on improving public transit, thereby saving enormous amounts of energy, avoiding billions of tons of greenhouse gas emissions, and preventing the loss of more than 40,000 lives in car accidents each year in the U.S. alone. But we didn’t do that.

    In the mid-century, we might have been able to stave off the development of the atomic bomb and averted the apocalyptic dangers we now find ourselves in. We missed that opportunity, too. (New nukes are still being designed and built.)

    In the late 20th century, regulations guided by the precautionary principle could have prevented the spread of toxic chemicals that now poison the entire planet. We failed in that instance as well.

    Now we have one more chance.

    With AI, humanity is outsourcing its executive control of nearly every key sector—finance, warfare, medicine, and agriculture—to algorithms with no moral capacity.

    If you are wondering what could go wrong, the answer is plenty.

    If it still exists, the window of opportunity for stopping AI will soon close. AI is being commercialized faster than other major technologies. Indeed, speed is its essence: It self-evolves through machine learning, with each iteration far outdistancing Moore’s Law.

    And because AI is being used to accelerate all things that have major impacts on the planet (manufacturing, transport, communication, and resource extraction), it is not only an uber-threat to the survival of humanity but also to all life on Earth.

    AI Dangers Are Cascading
    In June 2023, I wrote an article outlining some of AI’s dangers. Now, that article is quaintly outdated. In just a brief period, AI has revealed more dangerous implications than many of us could have imagined.

    In an article titled “DNAI—The Artificial Intelligence/Artificial Life Convergence,” Jim Thomas reports on the prospects for “extreme genetic engineering” provided by AI. If artificial intelligence is good at generating text and images, it is also super-competent at reading and rearranging the letters of the genetic alphabet. Already, AI tech giant Nvidia has developed what Thomas calls “a first-pass ChatGPT for virus and microbe design,” and applications for its use are being found throughout life sciences, including medicine, agriculture, and the development of bioweapons.

    How would biosafety precautions for new synthetic organisms work, considering that the entire design system creating them is inscrutable? How can we adequately defend ourselves against the dangers of thousands of new AI-generated proteins when we are already doing an abysmal job of assessing the dangers of new chemicals?

    Research is advancing at warp speed, but oversight and regulation are moving at a snail’s pace.

    Threats to the financial system from AI are just beginning to be understood. In December 2023, the U.S. Financial Stability Oversight Council (FSOC), composed of leading regulators across the government, classified AI as an “emerging vulnerability.”

    Because AI acts as a “black box” that hides its internal operations, banks using it could find it harder “to assess the system’s conceptual soundness.” According to a CNN article, the FSOC regulators pointed out that AI “could produce and possibly mask biased or inaccurate results, [raising] worries about fair lending and other consumer protection issues.” Could AI-driven stocks and bonds trading tank securities markets? We may not have to wait long to find out. Securities and Exchange Commission Chair Gary Gensler, in May 2023, spoke “about AI’s potential to induce a [financial] crisis,” according to a U.S. News article, calling it “a potential systemic risk.”

    Meanwhile, ChatGPT recently spent the better part of a day spewing bizarre nonsense in response to users’ questions, and it often has “hallucinations,” which is when the system “starts to make up stuff—stuff that is not [in line] with reality,” said Jevin West, a professor at the University of Washington, according to a CNN article. What happens when AI starts hallucinating financial records and stock trades?

    Lethal autonomous weapons are already being used on the battlefield. Add AI to these weapons, and whatever human accountability, moral judgment, and compassion still persist in warfare will tend to vanish. Killer robots are already being tested in a spate of bloody new conflicts worldwide—in Ukraine and Russia, Israel and Palestine, as well as in Yemen and elsewhere.

    It was obvious from the start that AI would worsen economic inequality. In January, the IMF forecasted that AI would affect nearly 40 percent of jobs globally (around 60 percent in wealthy countries). Wages will be impacted, and jobs will be eliminated. These are undoubtedly underestimates since the technology’s capability is constantly increasing.

    Overall, the result will be that people who are placed to benefit from the technology will get wealthier (some spectacularly so), while most others will fall even further behind. More specifically, immensely wealthy and powerful digital technology companies will grow their social and political clout far beyond already absurd levels.

    It is sometimes claimed that AI will help solve climate change by speeding up the development of low-carbon technologies. But AI’s energy usage could soon eclipse that of many smaller countries. And AI data centers also tend to gobble up land and water.

    AI is even invading our love lives, as presaged in the 2013 movie “Her.” While the internet has reshaped relationships via online dating, AI has the potential to replace human-to-human partnering with human-machine intimate relationships. Already, Replika is being marketed as the “AI companion who cares”—offering to engage users in deeply personal conversations, including sexting. Sex robots are being developed, ostensibly for elderly and disabled folks, though the first customers seem to be wealthy men.

    Face-to-face human interactions are becoming rarer, and couples are reporting a lower frequency of sexual intimacy. With AI, these worrisome trends could grow exponentially. Soon, it’ll just be you and your machines against the world.

    As the U.S. presidential election nears, the potential release of a spate of deepfake audio and video recordings could have the nation’s democracy hanging by a thread. Did the candidate really say that? It will take a while to find out. But will the fact-check itself be AI-generated? India is experimenting with AI-generated political content in the run-up to its national elections, which are scheduled to take place in 2024, and the results are weird, deceptive, and subversive.

    A comprehensive look at the situation reveals that AI will likely accelerate all the negative trends currently threatening nature and humanity. But this indictment still fails to account for its ultimate ability to render humans, and perhaps all living things, obsolete.

    AI’s threats aren’t a series of easily fixable bugs. They are inevitable expressions of the technology’s inherent nature—its hidden inner workings and self-evolution of function. And these aren’t trivial dangers; they are existential.

    The fact that some AI developers, who are the people most familiar with the technology, are its most strident critics should tell us something. In fact, policymakers, AI experts, and journalists have issued a statement warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    Don’t Pause It, Stop It
    Many AI-critical opinion pieces in the mainstream media call for a pause in its development “at a safe level.” Some critics call for regulation of the technology’s “bad” applications—in weapons research, facial recognition, and disinformation. Indeed, European Union officials took a step in this direction in December 2023, reaching a provisional deal on the world’s first comprehensive laws to regulate AI.

    Whenever a new technology is introduced, the usual practice is to wait and see its positive and negative outcomes before implementing regulations. But if we wait until AI has developed further, we will no longer be in charge. We may find it impossible to regain control of the technology we have created.

    The argument for a total AI ban arises from the technology’s very nature—its technological evolution involves acceleration to speeds that defy human control or accountability. A total ban is the solution that AI pioneer Eliezer Yudkowsky advised in his pivotal op-ed in TIME:

    “[T]he most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”

    Yudkowsky goes on to explain that we are currently unable to imbue AI with caring or morality, so we will get AI that “does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

    Underscoring and validating Yudkowsky’s warning, a U.S. State Department-funded study published on March 11 declared that unregulated AI poses an “extinction-level threat” to humanity.

    To stop further use and development of this technology would require a global treaty—an enormous hurdle to overcome. Shapers of the agreement would have to identify the key technological elements that make AI possible and ban research and development in those areas, anywhere and everywhere in the world.

    There are only a few historical precedents when something like this has happened. A millennium ago, Chinese leaders shut down a nascent industrial revolution based on coal and coal-fueled technologies (hereditary aristocrats feared that upstart industrialists would eventually take over political power). During the Tokugawa Shogunate period (1603-1867) in Japan, most guns were banned, almost completely eliminating gun deaths. And in the 1980s, world leaders convened at the United Nations to ban most CFC chemicals to preserve the planet’s atmospheric ozone layer.

    The banning of AI would likely present a greater challenge than was faced in any of these three historical instances. But if it’s going to happen, it has to happen now.

    Suppose a movement to ban AI were to succeed. In that case, it might break our collective fever dream of neoliberal capitalism so that people and their governments finally recognize the need to set limits. This should already have happened with regard to the climate crisis, which demands that we strictly limit fossil fuel extraction and energy usage. If the AI threat, being so acute, compels us to set limits on ourselves, perhaps it could spark the institutional and intergovernmental courage needed to act on other existential threats.

    https://www.resilience.org/stories/2024-03-21/why-artificial-intelligence-must-be-stopped-now/
     
    #16     Mar 21, 2024
  7. Cuddles

    Cuddles

    I'm quite aware of AI's implications but don't advocate the tech-luddite approach of the author. In the immediate term, jobs & misinformation are amongst my biggest worries (fake articles, fake video, fake audio), though the custom microorganism danger does make one raise an eyebrow.

    https://themedicinemaker.com/discov...machine-learning-to-accelerate-drug-discovery
    https://news.mit.edu/2023/using-ai-mit-researchers-identify-antibiotic-candidates-1220
    https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/
    https://time.com/6340681/deepmind-gnome-ai-materials/



    on the other side:
    https://www.axios.com/2024/03/15/drone-swarms-ai-military-war
    https://www.defenseone.com/technolo...morrows-ai-powered-swarm-drones-ships/393528/
     
    #17     Mar 21, 2024
    Ricter likes this.
  8. Ricter

    Ricter

    I think the marriage of drones and AI is worrisome, a la that YouTube short "Slaughterbots". They're cheaper than paying cops (and troops).
     
    #18     Mar 21, 2024
  9. Cuddles

    Cuddles

    What's so wild to me is how video games (PC graphics) and cryptomining (via GPU purchases) catapulted the company leading AI. There were obvious scientific implications for parallel processing pre-crypto (CUDA) in research & development, but crypto was amongst the most monied these last few yrs.
     
    #19     Mar 21, 2024
  10. Joe Biden's parting gift will be the Democratic National Convention being held in Chicago, and it won't be Christian fascists burning the place to the ground. It will be the usual suspects along with thousands of inner-city blacks and Hispanics who have been abandoned for illegals. It's a pot getting ready to boil over. They're pissed off, really pissed off. Locally it's a story, as they take the new mayor to task nearly every day. Going to be a long hot summer in Chiraq.
     
    #20     Mar 21, 2024
    smallfil likes this.