Artificial Intelligence

Discussion in 'Chit Chat' started by expiated, May 29, 2023.

  1. expiated

    expiated

    Very recently, I have found myself returning to a number of old hobbies, such as learning chess, 2D animation and Spanish; not to mention a new one, which is learning how to interpret the meanings behind people's speech and body language.

    And yet, I still find myself a bit restless at times because I'm here or there with nothing to do, and don't have a book in my hand to increase my knowledge on some topic (like when I'm taking public transportation, or waiting for my order to arrive at a restaurant).

    Perhaps the one thing most responsible for turning me into a profitable trader was learning enough MetaQuotes Language to code the specific indicators I envisioned exactly as I imagined them (after discovering they were not available anywhere on the market).

    And now, I'm seeing that I'm going to run into the same sort of problem when it comes to producing my own animated educational videos, and that knowing how to integrate artificial intelligence (AI) into my efforts would be a powerful solution. (I also wonder if AI wouldn't be able to copy the way I approach trading Forex manually multiple times better than MT4's expert advisors [EAs] are able to do.)

    Therefore, I'm adding artificial intelligence alongside body language as my two new hobbies. As I'm doing with the latter, I hope to return to this thread as I'm able, to refresh my memory on what I've read should this endeavor be interrupted at times, and also to help me synthesize the new information to better absorb it into long-term memory (if I can).

    So, now I know what to do to fill my idle minutes... My plan is to have a book on AI with me everywhere I go, so that I can begin establishing a basic knowledge on the subject during those idle moments throughout my day.

     
    Josephine likes this.
  2. expiated

    expiated

    [Screenshot of the book]

    I just went online to put this book on hold at the public library, so I'll go pick it up at the main branch as soon as the Dana branch sends it over...

     
    Last edited: May 29, 2023
  3. expiated

    expiated

    I asked Comprehensive AI...

    HOW DO ARTIFICIAL INTELLIGENCE PROGRAMS WORK?

    Table of contents
    • Introduction
    • How AI works
    • Types of Machine Learning
    • Algorithms and Frameworks
    • Applications of AI
    • Challenges and Ethical Concerns
    • Conclusion

    Introduction

    How AI works

    Artificial Intelligence, or AI, is a complex technology with various applications that can provide insight into a vast amount of unstructured data. But how does AI work? It all starts with data collection, which involves gathering a vast amount of data from various sources. Thanks to the internet and social media, there is a wide range of data available for collection.

    The next step is data cleaning, which is the process of removing any corrupted or unnecessary data. This step is crucial as it ensures that the data is accurate and reliable, which is essential in training the AI models.

    Next comes training the model, where the input data is used to train the AI algorithms. This involves using complex mathematical models to learn from the data and make predictions. Then it's time for testing the model. This step is essential as it verifies that the model can accurately predict responses based on new input data and is free from bias or errors. Once the model is deemed ready, it's finally deployed, and the algorithms are implemented into a system, which can be integrated directly into business operations.

    Overall, AI works by gathering vast amounts of data, cleaning it up, and training models to make predictions. While it may seem like a straightforward process, it’s the complex algorithms and mathematical models used to analyze and learn from the data that make AI such an advanced and powerful technology.
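
    To make that pipeline concrete, here is a minimal sketch in Python using the scikit-learn library. The file name "customers.csv" and the "churned" label column are invented purely for illustration:

        # Toy version of the collect -> clean -> train -> test -> deploy pipeline.
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score

        data = pd.read_csv("customers.csv")        # 1. data collection (hypothetical file)
        data = data.dropna().drop_duplicates()     # 2. data cleaning

        X = data.drop(columns=["churned"])         # input variables
        y = data["churned"]                        # labels the model should predict

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

        model = LogisticRegression(max_iter=1000)  # 3. training the model
        model.fit(X_train, y_train)

        predictions = model.predict(X_test)        # 4. testing on unseen data
        print("accuracy:", accuracy_score(y_test, predictions))
        # 5. deployment would persist the model so an application can load and call it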

    Types of Machine Learning
    There are four main types of machine learning: supervised, unsupervised, semi-supervised and reinforcement learning. Supervised learning involves training an algorithm with labeled data, in which the machine learns to predict outcomes based on input variables. This type of learning is used in tasks such as spam filtering, fraud detection, and image recognition.

    Unsupervised learning involves training an algorithm with unlabeled data, allowing the machine to identify patterns and relationships within the data on its own. This type of learning is used in tasks such as customer segmentation, anomaly detection, and recommendation systems.

    Semi-supervised learning involves a combination of labeled and unlabeled data, often used when obtaining large amounts of labeled data is impractical or costly. This type of learning is used in tasks such as sentiment analysis and speech recognition.

    Reinforcement learning involves training an algorithm through a system of punishment and reward, learning through trial and error. This type of learning is used in tasks such as game playing, robotics, and self-driving cars.
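
    As a minimal sketch of the difference between the first two types, here is some Python using scikit-learn on synthetic data (generated only for illustration):

        # Supervised vs. unsupervised learning on the same synthetic points.
        from sklearn.datasets import make_blobs
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.cluster import KMeans

        X, y = make_blobs(n_samples=300, centers=3, random_state=0)

        # Supervised: labels y are provided, and the model learns to predict them.
        classifier = KNeighborsClassifier().fit(X, y)
        print(classifier.predict(X[:5]))

        # Unsupervised: no labels at all; the model finds group structure itself.
        clusterer = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
        print(clusterer.labels_[:5])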

    As technology advances, machine learning is becoming an increasingly vital part of our lives, with applications ranging from speech recognition to autonomous vehicles. However, challenges such as data bias, privacy concerns, and job displacement must also be addressed in order to ensure the ethical and responsible use of this technology.

    Algorithms and Frameworks
    Now that we've covered the basics of how AI works, let's dive into the specific algorithms and frameworks used in the process. One of the most commonly used algorithms is Linear Regression, which predicts a numerical value based on a set of input data.

    Logistic Regression, on the other hand, is used for classification problems, where data is grouped into categories. Moving on to Neural Networks, these are models inspired by the structure of the human brain and are capable of complex tasks such as image and speech recognition.

    Decision Trees are another popular algorithm used for both classification and regression problems. They work by splitting the data into smaller and more manageable portions until a decision is reached.

    A Random Forest, as the name suggests, is essentially a collection of Decision Trees that work together to provide a more accurate prediction. SVMs, or Support Vector Machines, are another important algorithm used for classification problems. They work by finding the optimal separation line between two sets of data points.

    K-Nearest Neighbors, as the name implies, selects the nearest data points to the input to make a prediction.

    TensorFlow and PyTorch are popular frameworks used for building neural networks and deep learning models, offering easy-to-use APIs and powerful computational capabilities.

    Understanding the different algorithms and frameworks is crucial in building an effective AI system. However, it's important to keep in mind that the choice of algorithm and framework depends on the specific problem at hand and data available. But no matter which approach you take, building and training an AI model is a complex and exciting process.
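
    As a rough sketch of how interchangeable these algorithms are at the code level, the following Python tries several of them on scikit-learn's small built-in iris dataset:

        # Comparing several of the algorithms above on one small dataset.
        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split
        from sklearn.linear_model import LogisticRegression
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier

        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        models = {
            "logistic regression": LogisticRegression(max_iter=1000),
            "decision tree": DecisionTreeClassifier(random_state=0),
            "random forest": RandomForestClassifier(random_state=0),
            "support vector machine": SVC(),
            "k-nearest neighbors": KNeighborsClassifier(),
        }
        for name, model in models.items():
            model.fit(X_train, y_train)
            print(name, "accuracy:", model.score(X_test, y_test))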

    Applications of AI
    When we think about applications of Artificial Intelligence, the most immediate ones that come to mind are Speech Recognition, Image Recognition, and Natural Language Processing. These technologies are already in use today, from Siri helping us send messages without having to type a single character, to Facebook's ability to recognise familiar faces in photographs.

    AI can also be found in Robotics, where it helps automate tasks and perform repetitive movements.

    Another popular application is autonomous vehicles, allowing us to sit back and relax while our car drives itself.

    And let's not forget about Virtual Personal Assistants, the trusty sidekick who can help us complete tasks, provide entertainment, and tell dad jokes. AI has numerous practical applications, many of which we probably aren't even aware of yet.

    Challenges and Ethical Concerns
    Artificial Intelligence (AI) is rapidly changing the way we live and work. However, as we embrace this technology, we must also address the challenges and ethical concerns that come with its use. One of the biggest concerns is data bias. AI algorithms are only as good as the data they are trained on. If the data is biased, the AI will produce biased results, which can perpetuate discrimination and harm individuals.

    Another concern is job displacement. As AI technology becomes more advanced, it may replace certain jobs, leading to unemployment and economic instability.

    Privacy concerns are also a major issue. As AI collects and analyzes massive amounts of data, it raises questions about how that data is being used and who has access to it. Unintended consequences are another challenge.

    AI can produce unexpected results that can have negative consequences, such as in the case of self-driving cars that cause accidents.

    Lastly, transparency and accountability are crucial in ensuring ethical AI practices. It's essential for AI developers to be transparent about how their technology works and to be held accountable for any negative outcomes.

    Overall, while AI has the potential to benefit society, we must also address these challenges and ethical concerns to ensure a positive and responsible future for AI.

    Conclusion
    In a nutshell, Artificial Intelligence works by collecting and cleaning data, training the model, testing it, and then deploying it to serve its purpose. The main types of machine learning are supervised, unsupervised, semi-supervised, and reinforcement learning, and common algorithms include linear regression, logistic regression, decision trees, and neural networks, often built with frameworks such as TensorFlow and PyTorch. AI has various applications, including speech recognition, image recognition, and natural language processing. However, AI also poses challenges and ethical concerns, such as data bias, job displacement, privacy concerns, unintended consequences, and the need for transparency and accountability. These issues need to be addressed to reap the benefits of AI fully.
     
    ElCubano likes this.
  4. expiated

    expiated

    Mathematics Applied to Artificial Intelligence: A Formal Overview

    Table of contents
    • Introduction
    • Mathematical Foundations for AI
    • Mathematical Techniques in Machine Learning
    • Applications of Mathematics in AI
    • Challenges in Mathematics Applied to AI
    • Future of Mathematics in AI
    • Conclusion

    Introduction
    Mathematics and Artificial Intelligence (AI) go hand in hand. AI is built on various mathematical foundations and techniques. Mathematics applied to AI involves the use of mathematical models and theories to design, develop and improve algorithms that enable computers to simulate human intelligence.

    It is crucial to the development of AI because without it, AI would not exist. Mathematics applied to AI has a brief but rich history. The field has its roots in the 1950s, but it wasn't until the 1980s and 1990s that advancements in computation made AI research more accessible to scientists and researchers. Since then, there have been significant breakthroughs in the field of AI, and mathematics has been a driving force behind many of these advancements.

    Mathematics applied to AI is essential in today's world because of its applications in various fields such as Natural Language Processing (NLP) and Computer Vision. NLP is used in chatbots and virtual assistants like Siri and Alexa, while computer vision is used in facial recognition software and self-driving cars.

    Mathematics applied to AI is also critical in Robotics and Speech Recognition, where algorithms must be able to recognize and interpret human speech and movement accurately. In short, mathematics applied to AI is vital to the development and improvements in AI applications that affect our daily lives. Its importance cannot be overstated, and with advancements in computation, there is no telling where this field may go in the future.

    Mathematical Foundations for AI
    Mathematics provides the groundwork for the development of artificial intelligence (AI). Linear algebra helps in representing data as a multi-dimensional space, whereas calculus helps us in optimizing the learning process. Probability theory and statistics enable us to work with uncertainty and make informed decisions.

    Linear algebra deals with mathematical structures like vectors, matrices, and linear transformations. These structures form the basis of modern machine learning algorithms. For example, image processing algorithms represent images as matrices to enable computer recognition.

    Calculus is used to optimize the learning process of a machine learning model. By using calculus, we can arrive at the minima or maxima of a function, which is crucial in training AI models.

    Probability theory and statistics enable us to quantify uncertainty and variability in data. This comes in handy while designing risk-sensitive algorithms like those that help in credit scoring or prediction of stock prices.

    In conclusion, Mathematics provides the necessary tools to develop AI systems. By utilizing concepts like linear algebra, calculus, probability theory, and statistics, AI researchers can create intelligent models that learn and adapt from data.
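
    Here is a tiny NumPy sketch of two of those ideas working together: data held in an array (linear algebra) and a hand-computed derivative (calculus) used to minimize a squared-error loss. The data is made up for illustration:

        # Fit y = w * x by gradient descent on mean squared error.
        import numpy as np

        X = np.array([1.0, 2.0, 3.0, 4.0])  # inputs as a vector (linear algebra)
        y = 2.0 * X                         # targets; the "true" weight is 2

        w = 0.0                             # initial guess
        for _ in range(100):
            error = w * X - y               # prediction error
            grad = 2 * np.mean(error * X)   # derivative of the loss w.r.t. w (calculus)
            w -= 0.1 * grad                 # step toward the minimum

        print(round(w, 3))                  # converges to roughly 2.0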

    Mathematical Techniques in Machine Learning
    Well, well, well! Looks like we have made it to the section that everyone has been waiting for - Mathematical Techniques in Machine Learning. Honestly, when I first got into this field, the thought of math made me shudder. But then I realized, without math, there would be no machine learning, and without machine learning, what even is AI? So, let's get into it, shall we?

    Firstly, we have supervised learning, which is when the algorithm is trained on labeled data. This means that the input data has already been classified, and the algorithm learns to recognize patterns in the data to predict the output for future inputs.

    Then comes unsupervised learning, where the algorithm is trained on unlabeled data, meaning the data doesn't have predefined categories. The algorithm learns to recognize patterns and group similar data together.

    Next up, we have reinforcement learning, which is learning through trial and error. An agent interacts with an environment, learning from the rewards or punishments it receives for actions taken.
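
    Here is a minimal sketch of that trial-and-error loop in Python: a tabular Q-learning agent on a five-cell corridor, an environment invented purely for illustration (reaching the last cell earns a reward of 1):

        # Tabular Q-learning: learn to walk right along a 5-cell corridor.
        import random

        n_states = 5                        # cells 0..4; cell 4 is the goal
        moves = [-1, +1]                    # action 0 = left, action 1 = right
        Q = [[0.0, 0.0] for _ in range(n_states)]
        alpha, gamma, epsilon = 0.5, 0.9, 0.1

        for episode in range(500):
            s = 0
            while s != 4:
                if random.random() < epsilon:                 # explore
                    a = random.randrange(2)
                else:                                         # exploit what's learned
                    a = max(range(2), key=lambda i: Q[s][i])
                s2 = min(max(s + moves[a], 0), n_states - 1)
                r = 1.0 if s2 == 4 else 0.0                   # reward only at the goal
                # Nudge the estimate toward reward plus discounted future value.
                Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
                s = s2

        # Learned policy: action 1 ("move right") in every non-goal state.
        print([max(range(2), key=lambda i: Q[s][i]) for s in range(4)])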

    And finally, we have deep learning, a subset of machine learning that uses neural networks to learn from large amounts of data.

    Phew, that was a lot of learning! But trust me, once you get the hang of it, it becomes quite exciting to see how well your model performs. And don't worry if you're not a math expert, there are plenty of tools and libraries available to make life easier. So buck up, champ! Mathematics applied to AI is not as scary as it seems. Let's keep exploring!

    Applications of Mathematics in AI
    Mathematics is a fundamental component of AI in various domains. Natural Language Processing (NLP), Computer Vision, Robotics, and Speech Recognition applications have relied on the foundations of mathematics.

    In NLP, complex algorithms and probabilistic models are used to understand and generate human language, often considered one of the most challenging problems in AI.

    Similarly, Computer Vision uses linear algebra, calculus, and probability to advance object recognition and image processing capabilities.

    Robotics applications, on the other hand, are dependent on statistical decision-making approaches to simulate human-like cognitive behaviors. Moreover, the implementation of signal processing techniques and machine learning algorithms has greatly enhanced the accuracy of Speech Recognition systems.

    The integration of Mathematics with AI provides opportunities for generating new insights and analysis for solving various problems. Despite challenges like algorithm selection and data bias, the rapid advancements in Machine Learning and Deep Learning techniques show great potential for additional progress in AI.

    Challenges in Mathematics Applied to AI
    Let's be real: with artificial intelligence progressing at lightning speed, we need all the help we can get. Mathematical techniques help machines understand and reason, but applying math to AI has its challenges.

    First up is data quality and bias. Machines learn from data, but if the data is biased, the learning will be too. You might not think people would intentionally feed machines biased data, but come on, this is the real world. Data is often incomplete or outdated as well, which only adds to the problem.

    Then there's high dimensionality. Data sets can have *a lot* of variables, which takes a lot of number crunching to make sense of. We're talking more than a wibble of variables, here. More than a gobble, even. Yes, these are technical terms.
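
    One standard response to high dimensionality is dimensionality reduction. Here is a minimal Python sketch using principal component analysis (PCA) from scikit-learn on purely synthetic data:

        # Squeeze 100 variables down to 5 principal components.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 100))       # 200 samples, 100 noisy variables

        pca = PCA(n_components=5)
        X_reduced = pca.fit_transform(X)
        print(X_reduced.shape)                # (200, 5)
        print(pca.explained_variance_ratio_)  # how much variance each component keeps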

    Algorithm selection can be tricky too. Algorithms are designed to handle different types of data and tasks, so choosing the right one is critical. Because we sure wouldn't want the wrong algorithm leading to catastrophe.

    Finally, there's model interpretability. Machines might learn quickly and accurately, but can we understand how they're doing it? This is an issue because we want to know that machines are making decisions based on sound reasoning. We don't want them turning into that friend who always gives bad advice for mysterious reasons.

    All things considered, math is more important to AI than a steady supply of caffeinated beverages. And that's really saying something.

    Future of Mathematics in AI
    The future of mathematics applied to AI is bright. With constant advancements in technology, the potential for new discoveries is almost limitless. One area that is expected to see significant growth is deep learning. As we continue to generate more data, deep learning techniques will be used to sift through and find useful patterns.

    AI has the potential to impact our society in numerous ways. Self-driving cars are becoming more common, and we can expect to see more automation in industries like healthcare and agriculture. While there are concerns about job loss, the benefits of AI outweigh the risks. With proper regulation, we can ensure that AI is used for the greater good.

    Emerging research areas in mathematics applied to AI include explainable AI and quantum computing. Explainable AI seeks to make AI models more transparent, allowing humans to understand how and why they make certain decisions. Quantum computing, on the other hand, promises to let us analyze data at a significantly faster rate than current technologies.

    In conclusion, mathematics applied to AI will be crucial in shaping our future. With the potential for advancements in deep learning and the impact on society, we must continue to invest in this field. By addressing the challenges we face and exploring emerging research areas, we can unlock the full potential of AI.

    Conclusion
    Mathematics applied to AI is crucial for the development of the field as it provides the necessary tools for building and improving models. The importance of mathematical foundations such as Linear Algebra, Calculus, Probability Theory, and Statistics cannot be overstated. However, as with any rapidly evolving field, there are challenges such as data quality, algorithm selection, and model interpretability. Despite these challenges, the potential for advancements and an impact on society is immense. The future of mathematics in AI is bright, and emerging research areas such as quantum computing and explainable AI hold promise. The possibilities are endless.
     
  5. expiated

    expiated

    There is no universally accepted notion as to what constitutes intelligence. But let's suppose that, at the very least, intelligence involves the following:
    1. Learning
    2. Reasoning
    3. Understanding
    4. Grasping truths
    5. Seeing relationships
    6. Considering meanings
    7. Separating fact from belief
    There is no computer that can fully implement any of these mental activities because what a computer does is rely on machine processes to manipulate data using pure math in a strictly mechanical fashion.

    Nonetheless, a computer CAN simulate various kinds of tasks in ways that pass for intelligence, and we can relate these to categories of intelligence such as those defined by Howard Gardner of Harvard University, including:

    Visual-spatial
    Bodily-kinesthetic
    Creative
    Interpersonal
    Intrapersonal
    Linguistic
    Logical-mathematical

    So then, AI doesn’t really have anything to do with human intelligence, but relies on algorithms to achieve results that might or might not have anything to do with human goals or methods of achieving those goals. Even so, we can nonetheless categorize (or define) AI in four ways:
    • Acting humanly
    • Thinking humanly
      • Introspection
      • Psychological testing
      • Brain imaging
    • Thinking rationally
    • Acting rationally
    (As we all know, humans do not always think and act rationally. [Just ask Commander Spock.])

    The above categories constitute four uses or ways to apply AI. But everything mentioned thus far only scratches the surface of what all AI entails. So, in order to form a better basis for understanding AI, we will need to introduce additional concepts. For instance, we can classify AI as strong or weak.

    Strong AI is generalized intelligence that can adapt to a variety of situations. Weak AI is specific intelligence designed to perform a certain task well. The problem with strong AI is that it doesn’t perform any particular task well. The problem with weak AI is that it is too specific to perform tasks independently.

    Other concepts necessary for understanding AI (four classification types promoted by Arend Hintze) are…

    Reactive machines
    Limited memory
    Theory of mind
    Self-awareness

    (I'm electing not to define or describe the above ideas until they actually come up in the course of reviewing or summarizing additional information.)
     
  6. expiated

    expiated

    John Paul Mueller and Luca Massaron (at least at the time they published the 2018 edition of their book) do not indulge in the negativity espoused by AI's detractors. Even so, the inability of computers to simulate creative or intrapersonal intelligence, the fact that theory of mind is beyond their present capability in any commercial form, and the fact that self-awareness requires technologies that are not even remotely possible now all make clear how woefully inadequate modern computers are for simulating human intelligence, much less actually becoming intelligent themselves.

    Theory of mind: A machine that can assess both its required goals and the potential goals of other entities in the same environment has a kind of understanding that is feasible to some extent today, but not in any commercial form. However, for self-driving cars to become truly autonomous, this level of AI must be fully developed. A self-driving car would not only need to know that it must go from one point to another, but also intuit the potentially conflicting goals of drivers around it and react accordingly. [Pedestrians too, since I seem to recall more than one story of self-driving cars running down people crossing the street.]

    Self-awareness: This is the sort of AI that you see in movies. However, it requires technologies that aren't even remotely possible now because such a machine would have a sense of both self and consciousness. In addition, instead of merely intuiting the goals of others based on environment and other entity reactions, this type of machine would be able to infer the intent of others based on experiential knowledge.

    Who is John Paul Mueller?
    John Paul Mueller is a freelance author and a technical editor.

    Who is Luca Massaron?
    Luca Massaron is a data scientist and a research director specializing in multivariate statistical analysis, machine learning, and customer insight, with over a decade of experience solving real-world problems and generating value for stakeholders by applying reasoning, statistics, data mining, and algorithms.
     
    Last edited: Jun 1, 2023
  7. expiated

    expiated

    Excerpt from The Briefing, a daily analysis of news and events from a Christian worldview, by R. Albert Mohler, Jr., Thursday, June 1, 2023


    Well, Are Humans Facing the Threat of Extinction by Artificial Intelligence or Not?
    The Argument Over the Consequences of AI Rages On


    Well, are we facing an imminent threat of extinction or not? And if people actually believe that we are, would it be located in a story on page six? All kinds of interesting questions coming up. After yesterday, a raft of news media reported that tech leaders were warning of a risk of extinction from artificial intelligence. And there's a story here, a real story here.

    First of all, because the AI issue, the artificial intelligence issue, is deservedly front and center right now in a lot of our conversation. And there's a reason for that. Artificial intelligence. Now, remember, intelligence in this sense is at least mimicking a human intelligence at some level. It's not at an advanced level, it's not an equal level, but the use of the word intelligence in this sense, is at least mimicking to some degree human intelligence.

    Now, as you're thinking about artificial intelligence, just take the word intelligence, put artificial in front of it. Now, some people would say, "Well, isn't that just simply what a computer is?" And yet, no. That's not what a computer is. Computers process, but computers do not think, or at least at this point, computers haven't thought, or at least we thought they weren't thinking.

    But as you're looking at this in a very serious vein, artificial intelligence has all of a sudden arrived in a way that has surprised even many of its developers and proponents; even many writing science fiction are behind on this. And so the release of products such as ChatGPT and others, just in the matter of the last several months, has at least served public notice that something along the lines of a technological leap is now taking place, and it's right before our eyes. Whether it's truly an advance or not, it's going to take some time for us to understand. But it's going to take not only time but also some moral framework in which we can make the evaluation.

    But we do need to note that even as these technologies are now arising and they're very much a part of our public conversation, they're increasingly being immediately interwoven into the operations of some corporations. Already you have public institutions such as universities and colleges trying to figure out, what does this mean? "Has this student really done the work turned in in this paper?" Huge questions about intellectual ownership. Huge questions about human responsibility, vis-a-vis the machine. But we are also looking at the fact that this is happening faster than actual human intelligence can process the big questions.

    Now at this point, we just need to recognize that the giant, indeed quantum leaps in technology that we've experienced in recent decades, they've eventually pointed to the fact that these machines, which after all acted as if they were thinking, may actually according to some of the engineers have the capacity of thinking, thus the word intelligence. And that's why the development of something like ChatGPT and the other commercial brands out there, it really has changed the moral discussion because we're looking at the anticipation that something's right around the corner.

    Now, the most vivid understanding of what might be right around the corner as a threat, is the fact that AI could turn against the humans, against the humanity that invented it. This is a very old human scenario. It is particularly a scenario that is played out in the imagination in the modern age. But what makes this news story particularly newsworthy, is that in this case, so many of the leading technology figures when it comes to artificial intelligence, they have themselves gathered together and signed a statement warning about the fact that the very technology that they have been developing, might pose a risk of extinction to the entire human race.

    Now, to unpack that, we're going to have to think about this for a moment. Kevin Roose of The New York Times starts his account this way, "A group of industry leaders warned on Tuesday that the artificial intelligence technology they were building might one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars."

    Now, let's just think about that opening paragraph for a moment. That really does sound troubling. And indeed I'm not making light of it. I am pointing to a basic incongruity in moral terms that should come to our attention. With that opening paragraph, we're told that a group of industry leaders who had been involved in developing artificial technology, the way that The New York Times puts it, they've warned about the artificial intelligence technology they were building, and now they who have been building it are warning us that it might, "One day, pose an existential threat to humanity." Now, let's just remind ourselves existential threat means a threat to the existence of humanity.

    So, let's just look at that opening paragraph in that news story in the front page of Wednesday's edition of The New York Times. We're being told that a group of scientists is warning us that they have brought humanity to the point of a possible extinction by a technology that they have been developing.

    Yes, seriously, that's exactly what they're saying. A one-sentence statement that was released by the scientists through the auspices of the Center for AI Safety was this, "Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war."

    Now, wait just a moment. We're being told here that what is sought is the mitigation or the lessening of, the reduction of "the risk of extinction from AI." Extinction? Exactly what extinction are they talking about? Well, they're talking particularly about human extinction. So here you have some of the leading scientists in the field, and by the way, that's not an exaggeration.

    Those who had gathered together to make this statement really do represent so many of, if not most of the leading theorists, engineers, and scientists in this field, the very people who've been developing these technologies, now, they tell us that they're concerned that what they are creating could lead rather quickly, as a matter of fact, to nothing less than the extinction of the entire human race.

    Later in the Times report we read this, "The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models, the type of AI system used by ChatGPT and other chatbots have raised fears that AI, artificial intelligence could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white collar jobs."

    Of particular interest is one paragraph in the news article. I would try to describe it, but just hear this, "Eventually, some believe AI could become powerful enough that it could create societal scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen." This really does tell us something about humanity, about human nature, human thinking and human behavior.

    Because here we are looking at the fact that the scientists who after all have been driving this technology, they're the ones who've been inventing it, they've been innovating, they've been developing it, making it more sophisticated, packaging it for public use. Now, they tell us they might just have created something that will bring humanity to a complete end.

    It's also interesting that a newspaper as influential as The New York Times would summarize the statement by saying that the scientists had stopped short "of explaining how that would happen." Is that because they don't know or is that because they don't want to tell us?

    Behind this is also a call even from some of the people who were the leading innovators in this technology. There are now calls to (a), put down a moratorium on any future developments until some of these issues can be worked out. Or (b), create some kind of government agency that would give oversight to this technology as it develops. Or (c), at least come up with some way of understanding how a response to this technology might mitigate its dangers.

    But as you're looking at this, you recognize this is a parable of humanity. As Christians operating out of a biblical worldview, we just have to understand a couple of things that come immediately to the fore. For one thing, you have the ability of human beings to create mischief on a massive scale.

    Mischief on a massive scale in terms of modern technology has just increased that problem exponentially. It was one thing when warfare was a matter of throwing spears at one another. It's another thing, when you have the development of explosive devices. It's another thing, when you create airplanes that can drop those devices. It's another thing, if you come up with thermonuclear weapons able to be sent on missiles, traveling now at hypersonic speed.

    Humanity has so often developed its technology towards extremely lethal ends. And at some point we all have to ask the question, "Will those lethal ends be directed eventually at us or at all of us? Is that what we have unleashed?" But there's something else here, and that is, that there's a deep understanding of the morality of knowledge, and this is something that is so crucial to the biblical worldview.

    As a matter of fact, it's so crucial that it takes us to the tree of the knowledge of good and evil, in the Garden of Eden. That was the tree, the fruit of which Adam and Eve were told not to eat, and yet they sinned; they broke the command of God and they did eat of it, and then, from that point onward, they and all of their descendants. And that means all of us, we have the knowledge of good and evil. And as a matter of fact, we cannot escape the knowledge of good and evil.

    We can put knowledge and intelligence to good ends or we can put knowledge and intelligence to evil ends. And one of the things we need to recognize is that nothing on planet earth is truly morally neutral. If you create this kind of technology, it can be used potentially for good, but at the same time it can be used for evil. Especially when we reach our age, we have reached a moment in human history where we actually have the technology to do enormous damage to the entire planet and to ourselves with the weapons that we create. Now, it turns out that one of those weapons might be something that is identified as artificial intelligence.

    But there's another huge worldview dimension here, and that has to do with what intelligence actually is. Because even as you're looking at the news reports, National Public Radio covered the story and it was the lead story on its website for a while with the headline, "Leading experts warn of a risk of extinction from AI." Extinction? What kind of extinction? Well, they mean, human extinction.

    So there's simply no doubt that this new technology comes with very genuine risks, but here we have to watch the language. And from a Christian worldview, the language becomes really, really important because we are talking about the word intelligence. And at a certain level, intelligence is not an exclusively human measure or an exclusively human capacity.

    On the other hand, when we're talking about human intelligence, we're not just talking about a greater intelligence, we're talking about a different category of intelligence all together. And that becomes very clear when you just observe the difference between, for example, a human being and a dog. Again, all dog lovers I think would agree on the fact, that dogs can be incredibly smart, they can be incredibly intuitive. They also have some senses that human beings do not have. They smell many things that we do not smell. And at least on most days, I think I'm thankful for that.

    But when it comes to analysis, when it comes to self-knowledge, when it comes to the ability to conceptualize, human beings are in an entirely different category. And of course, the Bible explains this not just by greater intelligence or greater cerebral circumference or greater brain mass, instead, it describes this as being made in God's image. That's nothing that can be reduced to the merely physical or physiological. Made in God's image means, that as image bearers of God, we have the capacity, first of all to know him.

    Our dog might like to put himself right in the sunlight because it's warm, but the dog does not thank God for having created a world that gave us the sun as the source of warmth. You have a completely different analytical process going on here. But being made in God's image also means, that we have this enormous capacity to imagine not only factuals but counterfactuals. We can imagine not only what is, but what might be. Not only what is, but what might have been. And arguments about facts and counterfactuals are very much at play in this kind of headline news story.
     
  8. expiated

    expiated

    AN ABBREVIATED HISTORY OF ARTIFICIAL INTELLIGENCE

    In 1956, scientists attending a summer workshop held at Dartmouth College predicted machines would be able to reason as effectively as humans within a generation, but they were wrong. Only now are machines able to perform mathematical and logical reasoning as effectively as humans. But this still leaves seven other categories of intelligence to go!

    At this point (or at least, as of 2018) machines are able to do a moderate to highly effective job of simulating bodily-kinesthetic intelligence, but only a moderate job with visual-spatial intelligence, a low to moderate job with interpersonal intelligence, a low job with linguistic intelligence, and a poor job with creative and intrapersonal intelligence (meaning, in effect, that they are incapable of simulating them at all).

    Part of the problem is that no one really understands how human beings reason well enough to be able to "teach" machines how to simulate it, and since no concrete description of the processes involved exists, it is impossible for AI to replicate them.

    In any case, the first truly useful and successful implementation of AI came with the advent of expert systems in the 1970s and again in the 1980s. These included rule-based systems, which use if/then statements to base decisions on rules of thumb; frame-based systems, which use databases organized into related hierarchies of generic information called frames; and logic-based systems, which rely on set theory to establish relationships.

    These are still seen today, but are no longer called expert systems. For example, spelling and grammar checkers are two kinds of rule-based expert systems. In the 1990s, the phrase expert system began to disappear, replaced by the idea that expert systems were a failure. However, the reality is that they were so successful they became ingrained in the applications they were designed to support. For example, you no longer need to buy a separate grammar-checking application (such as RightWriter) when you're using a word processor, because word processors already have a grammar checker built in. Even so, the problem with expert systems is that they can be hard to create and maintain.
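
    As a toy illustration of the rule-based idea, here is a Python sketch of a few if/then checks of the kind a simple grammar checker might encode; the rules themselves are invented for illustration:

        # A miniature rule-based "expert system": each rule is an if/then check.
        import re

        rules = [
            (r"\btheir is\b", 'did you mean "there is"?'),
            (r"\b(\w+) \1\b", "repeated word"),
            (r" {2,}",        "multiple consecutive spaces"),
        ]

        def check(text):
            findings = []
            for pattern, advice in rules:
                for match in re.finditer(pattern, text, flags=re.IGNORECASE):
                    findings.append((match.group(0), advice))
            return findings

        print(check("Their is a a problem  here."))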

    Indeed, AI has repeatedly followed a path in which proponents overstate what is possible, which induces people who are not technologically inclined but have lots of money to make investments; then comes a period of criticism when AI fails to meet expectations; and finally, a reduction in funding occurs, referred to as an "AI winter."

    Currently, we are in a new phase of overstated hype because of the advent of machine learning, a technology that helps computers learn from data.
     
  10. expiated

    expiated

    May 17, 2023

    Can you copyright the content you make with generative AI?

    Brandon Copple: Head of Content at Descript. Former Editor at Groupon, Chicago Sun-Times, and a bunch of other places. Dad. Book reader. Friend to many Matts.

    You can’t copyright AI-generated images, writing, video, or anything, according to a ruling by the U.S. Copyright Office. So if somebody else wants to reuse something you made using ChatGPT, DALL-E, or any other AI tools, they can and there’s nothing you can do about it, legally (you could do something illegal, like smash their coffee maker with a hammer, but that’s risky, and doesn’t scale).

    The good news: this is just the first step in the development of the law around who owns AI-created or -assisted content. The Copyright Office’s decision will almost certainly be appealed, and the question will wind its way through the court system, potentially all the way to the Supreme Court.

    Here’s an overview of what the Copyright Office’s decision means for creators, and what to expect from the legal system in the coming years (yeah, the law moves in years).

    What the Copyright Office said...

    The ruling came on a copyright application from Kristina Kashtanova, the author of a graphic novel who’d used the AI image-generation tool Midjourney to create the illustrations in a book she wrote. The Copyright Office — a bunch of government lawyers who evaluate copyright applications based on all the existing law — decided she could protect the book itself (the story, and the story plus the illustrations) but not the illustrations.

    Essentially, they’re saying that a creator doesn’t have enough creative control over a generative AI tool to claim its output as their own. More specifically, the Copyright Office said that because you can’t predict exactly what a generative AI tool will create, you can’t copyright it.

    The emphasis on predictability surprises Lisa Oratz, a veteran IP lawyer at Perkins Coie, which is a big law firm. (It’s worth mentioning that she’s also Descript’s lawyer). Predictability had never been a factor in determining human authorship before. “It’s a new standard,” she says.

    The Copyright Office equated the text prompts to telling an artist about an idea you have for a painting, then trying to copyright the work after they paint it. That makes sense if all you provided the artist was ideas, because ideas are not protectable and the painter did the creative work that gives them "authorship."

    Lisa says she would've expected the Copyright Office to focus more on the nature of the inputs — the prompts that Kashtanova wrote in Midjourney to get the images she wanted. The Copyright Office could have looked at the human involvement the artist described in their process and found that it wasn’t detailed enough to constitute sufficient human authorship — the key to determining whether a creator can claim a work as their own.

    While that kind of human creativity has always been required for copyright protection, she says, the level of the creativity required has been pretty minimal. So Lisa believes there could be a decent argument that the prompts in this case were sufficiently detailed to meet the standard.

    Uncharted territory...

    The question the Copyright Office, and soon the courts, are essentially trying to answer here: how much human involvement does a creator have to exert over a machine to claim ownership of its output?

    This isn't the first time the legal system has wrestled with that question. It first came up when cameras were invented; the argument then was that you couldn't protect a photograph as your own, since the machine was the one capturing the image.

    But the Supreme Court rejected that idea. The camera didn’t control what was in the frame, or how it was positioned, lit, exposed, and so on. The photographer, a human, controlled all that stuff, which was sufficient for the photographer to claim authorship, and gave them the right to copyright their photos. That was nearly 150 years ago. The human authorship requirement comes from the Constitution, which gives patent protection to inventors and copyright protection to authors. It's still the standard today.

    The creator in this generative AI case, Kashtanova, tried to compare the creative input she put into things like prompt design to the things the long-ago Supreme Court found sufficient for photographers. The Copyright Office, surprisingly, introduced this predictability standard instead.

    Lisa questions whether that standard will hold up under scrutiny. After all, there’s some unpredictability involved in taking photos, too — the wind may blow a subject’s hair, or the camera may capture something the photographer didn’t notice — or with making a movie — the actors may do things a producer or director didn’t expect — but those works can obviously be protected by copyright.

    Advice for creators...

    A few days after the Copyright Office’s decision, the lawyer representing Kristina Kashtanova wrote an article tearing into the new “predictability” standard. He used the amusing analogy of Jackson Pollock, who made his art by flinging paint around and seeing what happened; nobody questioned his ability to copyright those works.

    But in a more salient point for creators, the lawyer argued that the Copyright Office was "incorrectly focusing on the output of the tool rather than the input from the human." Lisa tends to agree. Eventually, she believes — or at least hopes — the courts will too. Or it may be that Congress will step in and pass new legislation that lays out how the ownership of AI-produced content will be determined.

    So if you're using generative AI tools to create any part of your content — the cover art for your podcast, the background for your video, anything — the best thing you can do is to be sure you're employing as much human creativity in the process as possible. This might mean writing prompts with as much detail as possible — well beyond just suggesting ideas. You’ll want to be able to show you had a specific expression of your ideas in mind, and you just used the AI as a tool to generate it.

    "'A dinosaur in a teacu'’ probably wouldn’t be specific enough," Lisa says. "But 'a purple dinosaur with flowing yellow hair taking a bath in a porcelain teacup that has a picture of a pink horse' might be."

    Of course, the line between what is merely an idea and what's a specific expression of that idea is subjective, so it may be difficult to know whether what you have added rises to the level of something protectable. We can probably expect things to remain fairly murky, at least for a while.

    For now, Lisa warns that it is important to be aware that even highly detailed involvement in the process may not be sufficient to make the output protectable, as the Copyright Office has seemingly set a very high bar. So there may not be much you can do to prevent others from copying AI-generated output. That's a key consideration when you're deciding where and how to use generative AI in your creative process.

    A final note: as Lisa points out, the Copyright Office did indicate that if someone sufficiently modifies generated output, that could be protectable. So, if you're using generative AI as a starting point — e.g., using ChatGPT to create a rough draft and then re-writing it in your own voice — be sure you document the changes you made before you try to file for copyright protection, and then explain them in the application.
     
    #10     Jun 9, 2023