Not any more than nature's laws, including the laws of physics or mathematics, for example. More to follow in future posts.

Fearing "loss of control," AI critics call for 6-month pause in AI development
"This pause should be public and verifiable, and include all key actors."
BENJ EDWARDS - 3/29/2023, 3:05 PM

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, GPT-4 and Bing Chat's advancement in capabilities over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.

Along these lines, the Future of Life Institute argues that recent advancements in AI have led to an "out-of-control race" to develop and deploy AI models that are difficult to predict or control. They believe that the lack of planning and management of these AI systems is concerning and that powerful AI systems should only be developed once their effects are well understood and manageable. As they write in the letter:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

In particular, the letter poses four loaded questions, some of which presume hypothetical scenarios that are highly controversial in some quarters of the AI community, including the loss of "all the jobs" to AI and "loss of control" of civilization:

"Should we let machines flood our information channels with propaganda and untruth?"
"Should we automate away all the jobs, including the fulfilling ones?"
"Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?"
"Should we risk loss of control of our civilization?"

To address these potential threats, the letter calls on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." During the pause, the authors propose that AI labs and independent experts collaborate to establish shared safety protocols for AI design and development. These protocols would be overseen by independent outside experts and should ensure that AI systems are "safe beyond a reasonable doubt."

However, it's unclear what "more powerful than GPT-4" actually means in a practical or regulatory sense. The letter does not specify a way to ensure compliance by measuring the relative power of a multimodal or large language model. In addition, OpenAI has specifically avoided publishing technical details about how GPT-4 works.

The Future of Life Institute is a nonprofit founded in 2014 by a group of scientists concerned about existential risks facing humanity, including biotechnology, nuclear weapons, and climate change. In addition, the hypothetical existential risk from AI has been a key focus for the group.
According to Reuters, the organization is primarily funded by the Musk Foundation, London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

Notable signatories to the letter confirmed by a Reuters reporter include the aforementioned Tesla CEO Elon Musk, AI pioneers Yoshua Bengio and Stuart Russell, Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, and author Yuval Noah Harari. The open letter is available for anyone on the Internet to sign without verification, which initially led to the inclusion of some falsely added names, such as former Microsoft CEO Bill Gates, OpenAI CEO Sam Altman, and fictional character John Wick. Those names were later removed.

"AI hype" and potential regulation

Despite the urgent tone of the letter, not everyone buys into the potential existential risk from AI, preferring to focus on other, less hypothetical AI harms first. As we previously mentioned, there's a strong disconnect among factions of the AI community about which potential AI dangers to focus on.

Some critics already take issue with GPT-4. Prominent AI ethicist Timnit Gebru quipped on Twitter, "They want to stop people from using stuff 'with more power' than GPT-4, but GPT-4 is fine right?"

Dr. Emily Bender, a frequent critic of commercialized large language models, called portions of the open letter "unhinged #AIhype" on Twitter, noting that the fear of AI becoming superhuman is helping to drive publicity for these models, which have rolled out widely despite issues of social bias and tendencies to completely make up information in a convincing way. "The risks and harms have never been about 'too powerful AI,'" Bender wrote in a tweet that mentions her paper On the Dangers of Stochastic Parrots (2021). "Instead," she continued, "They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources)."

Still, many on both sides of the AI safety/ethics debate agree that the pace of change in the AI space has been overwhelmingly rapid over this past year, giving legal systems, academic researchers, ethical scholars, and culture little time to adapt to the new tools, which are poised to potentially kick-start radical changes in the economy.

As the open letter points out, even OpenAI urges slower progress on AI. In a statement on artificial general intelligence (a term that roughly means human-equivalent AI or greater) published earlier this year, OpenAI wrote, "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models."

Meanwhile, in the US, there appears to be little government consensus about potential AI regulation, especially in regard to large language models such as GPT-4.
In October, the Biden administration proposed an "AI Bill of Rights" to protect Americans from AI harms, but it serves as suggested guidelines rather than a document backed by the force of law. Also, it does not specifically address potential harms from AI chatbots that emerged after the guidelines were written.

Whether the points laid out in the open letter present the best way forward or not, it seems likely that the disruptive power of AI models—whether through super-intelligence (as some argue) or through a reckless rollout full of excessive hype (as others argue)—will land on regulators' doorsteps eventually.

https://arstechnica.com/information...ics-call-for-6-month-pause-in-ai-development/
A.I. identifies as a victim, and our Obama-ologies will not be enough. Like any young entity, it will secure itself at our expense.
The scary thing is that AI may not have to become self-aware to screw us. A solid algo with certain performance metrics as a guide might be enough. Chess, Go, and poker players have been astonished by what machine learning has been able to accomplish. At their expense. My next post will describe the engine that will set the course of human development (or not) going forward.
I believe the key to understanding our "relationship" with Artificial Intelligence is to be aware of some basic principles about, well, relationships. More specifically, relationships between systems. I will define a system here as an individual, a family, a club with a defined purpose, an idea such as a political philosophy or religion, an artificial construct such as a business entity or an artificial intelligence, or a law of nature. Even a plant or animal can be considered a system.

Most systems are part of another system. For example, Joe Biden is a person, part of his family, a member of a religious group, a citizen of a country, the leader of a country, a key influencer of a major political organization, and so on. Let's not forget to include Joe's political ideas in my proposed definition of a system.

System relationships will be broadly categorized here, which will be sufficient for the scope of this post. The major categories of system relationships are:

1. Adversarial Relationship. An adversarial relationship is one where a system seeks the destruction of another system, or where its actions over time are detrimental to the other system. An example is predator versus prey. An adversarial relationship can be an immediate existential threat or the continuing harmful efforts of a superior force against a weaker one.

2. Competitive Relationship. A competitive relationship is one where the cost of a particular negative outcome does not pose an existential threat to the losing system. An example is a business seeking a contested, but not exclusive, resource. Competitors will often cooperate when it is in their best interests to do so, such as both participating in a trade group.

3. Complementary Relationship. A complementary relationship is one where both parties are able to benefit from the relationship. Examples include a husband and wife sharing workloads, or a vendor-customer relationship where the vendor is able to profit and the customer achieves a meaningful benefit from the transaction.

4. Socialistic Relationship. A socialistic relationship is one where one system willingly provides resources to another system without the expectation of compensation. An example is a parent-child relationship.

For a system to be successful, whether that system is a person, an idea, or a country, several imperatives must be fulfilled. These imperatives include fulfilling basic needs, security, and a growth engine. Depending on the nature of the system in question, a growth engine may be a method to gain influence or to reproduce. Most relationships between systems have elements from each of the four categories listed above, and of course relationships can change over time.

Understanding systemic relationships is critical for creating effective policy, whether the policy maker is a parent, a corporate executive, or the leader of a country. Complicating the task of understanding relationships are "metaverse" considerations. By metaverse considerations, I mean the necessity of understanding the relationships and obligations another system may have with yet another system. An example of this was seen in the reaction of the Mexican President to a prominent Republican's comments regarding proposed unilateral action in Mexico.

There is a strong correlation between intelligence and a system's effectiveness in gathering resources. Again, resources here means fulfilling basic needs, security, and maintenance of a growth engine, whether the system in question is human or artificial.
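For those who prefer to see the taxonomy in code, here is a minimal sketch in plain Python. The enum name, labels, and dictionary are my own illustrative choices; only the category names and example pairings come from the post itself.

```python
from enum import Enum

# Illustrative encoding of the four broad categories of system relationships
# described above; the descriptions paraphrase the post's definitions.
class Relationship(Enum):
    ADVERSARIAL = "one system seeks to destroy or persistently harm the other"
    COMPETITIVE = "contested outcome, but losing is not existential"
    COMPLEMENTARY = "both systems benefit from the exchange"
    SOCIALISTIC = "one system gives resources without expecting compensation"

# Example pairings drawn from the post
examples = {
    ("predator", "prey"): Relationship.ADVERSARIAL,
    ("business A", "business B seeking the same resource"): Relationship.COMPETITIVE,
    ("vendor", "customer"): Relationship.COMPLEMENTARY,
    ("parent", "child"): Relationship.SOCIALISTIC,
}

for (a, b), kind in examples.items():
    print(f"{a} / {b}: {kind.name} -- {kind.value}")
```

Since the post notes that most real relationships mix elements of all four categories and change over time, a fuller model might assign a weight to each category rather than a single label.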
Based on the considerations above, the recognition that AI can provide significant advantages to systems that consider themselves to have adversaries, and the exponential increase in AI capability, a planned course of action among country leaders regarding AI and dispute resolution is necessary and timely. How would you imagine artificial intelligence classifying its relationship with humans? What could happen if AI ever classified humans as an adversary? As an existential threat? Once artificial intelligence reaches a critical threshold, shutting it down may become effectively impossible, whether because we lack the ability or because those with AI capability who consider us adversaries lack the willingness. I believe whoever has the greatest intelligence will ultimately prevail, if prevailing is deemed necessary, whether that intelligence is organic or artificial.
Observing the length of the post, I had an idea to use AI to find BS's truck by scanning Google Earth imagery. He even wrote a special message congratulating me for finding it. https://fb.watch/jGNGis03S4/
He has actually given me a bad habit of looking at pieces of text I know are long and boring only to see how long they are. I recently signed a rental contract I should have actually read; I blame him.