Is Artificial Intelligence useful?

Discussion in 'Politics' started by zdreg, Apr 21, 2024.

Is Artificial Intelligence useful?

  1. Yes

    4 vote(s)
    66.7%
  2. No

    0 vote(s)
    0.0%
  3. don't know

    2 vote(s)
    33.3%
  4. don't care

    0 vote(s)
    0.0%
  1. zdreg


    How to Picture A.I.
    To understand its strengths and limitations, we may need to adopt a new perspective.

    By Jaron Lanier

    March 1, 2024
    Illustrations by Arif Qazi
    A technology by itself is never enough. In order for it to be of use, it needs to be accompanied by other elements, such as popular understanding, good habits, and acceptance of shared responsibility for its consequences. Without that kind of societal halo, technologies tend to be used ineffectively or incompletely. A good example of this might be the mRNA vaccines created during the covid epidemic. They were an amazing medical achievement—and yet, because of widespread incomprehension, they didn’t land as well as they might have. It might not even be proper to call a technology a technology absent the elements needed to bring it usefully into the human world; if we can’t understand how a technology works, we risk succumbing to magical thinking.

    Another way of saying this is that we need cartoons in our heads about how technologies work. I don’t know enough about vaccines to make one for myself, but I have a vaccine cartoon, and it gives me an approximate understanding; it’s good enough to help me follow news about vaccines, and grasp the development process, the risks, and the likely future of the technology. I have similar cartoons in my head about rockets, financial regulation, and nuclear power. They aren’t perfect, but they give me good-enough intuitions. Even experts use cartoons to talk to one another: sometimes a simplified view of things helps them see the forest for the trees.

    On this point, I experience some tension with many in my community of computer scientists. I believe that the cartoons we have broadcast about A.I. are counterproductive. We have brought artificial intelligence into the world accompanied by ideas that are unhelpful and befuddling. The worst of it is probably the sense of human obsolescence and doom that many of us convey. I have trouble understanding why some of my colleagues say that what they are doing might lead to human extinction, and yet argue that it is still worth doing. It is hard to comprehend this way of talking without wondering whether A.I. is becoming a new kind of religion.

    In addition to the apocalyptic atmosphere, we don’t do a good job of explaining what the stuff is and how it works. Most non-technical people can comprehend a thorny abstraction better once it’s been broken into concrete pieces you can tell stories about, but that can be a hard sell in the computer-science world. We usually prefer to treat A.I. systems as giant impenetrable continuities. Perhaps, to some degree, there’s a resistance to demystifying what we do because we want to approach it mystically. The usual terminology, starting with the phrase “artificial intelligence” itself, is all about the idea that we are making new creatures instead of new tools. This notion is furthered by biological terms like “neurons” and “neural networks,” and by anthropomorphizing ones like “learning” or “training,” which computer scientists use all the time. It’s also a problem that “A.I.” has no fixed definition. It’s always possible to dismiss any specific commentary about A.I. for not addressing some other potential definition of it. The lack of mooring for the term coincides with a metaphysical sensibility according to which the human framework will soon be transcended.

    Is there a way to explain A.I. that isn’t in terms suggesting human obsolescence or replacement? If we can talk about our technology in a different way, maybe a better path to bringing it into society will appear. In “There Is No A.I.,” an earlier essay I wrote for this magazine, I discussed reconsidering large-model A.I. as a form of human collaboration instead of as a new creature on the scene. In this piece, I hope to explain how such A.I. works in a way that floats above the often mystifying technical details and instead emphasizes how the technology modifies—and depends on—human input. This isn’t a primer in computer science but a story about cute objects in time and space that serve as metaphors for how we have learned to manipulate information in new ways. I find that most people cannot follow the usual stories about how A.I. works as well as they can follow stories about other technologies. I hope the alternative I present here will be of use.

    We can draw our human-centered cartoon about large-model A.I. in four steps. Each step is simple. But they’ll add up to something easy to picture—and to use as a tool for thinking.

    I. Trees
    The very first step, and in some sense the simplest one, might also be the hardest to explain. We can start with a question: How can you use a computer to find out whether a photograph shows a cat or a dog? The problem is that cats and dogs look broadly similar. Both have eyes and snouts, tails and paws, four legs and fur. It’s easy for a computer to take measurements of an image—to determine whether it’s light or dark, or more blue or red. But those kinds of measurements won’t distinguish a cat from a dog. We can ask the same type of question about other examples. For instance, how can a program analyze whether a passage is likely to have been written by William Shakespeare?



    On a technical level, the basic answer is a glommed-together tangle of statistics which we call a neural network. But the first thing to understand about this answer is that we are dealing with a technology of complexity. The neural network, the most basic entry point into A.I., is like a folk technology. When researchers say that an A.I. has “emergent properties”—and we say that a lot—it’s another way of saying that we didn’t know what the network would do until we tried building it. A.I. isn’t the only field that’s like this; medicine and economics are similar. In such fields, we try things, and try again, and find techniques that work better. We don’t start with a master theory and then use it to calculate an ideal outcome. All the same, we can work with complexity, even if we can’t predict it perfectly.


    Let’s try thinking, in a fanciful way, about distinguishing a picture of a cat from one of a dog. Digital images are made of pixels, and we need to do something to get beyond just a list of them. One approach is to lay a grid over the picture that measures something a little more than mere color. For example, we could start by measuring the degree to which colors change in each grid square—now we have a number in each square that might represent the prominence of sharp edges in that patch of the image. A single layer of such measurements still won’t distinguish cats from dogs. But we can lay down a second grid over the first, measuring something about the first grid, and then another, and another. We can build a tower of layers, the bottommost measuring patches of the image, and each subsequent layer measuring the layer beneath it. This basic idea has been around for half a century, but only recently have we found the right tweaks to get it to work well. No one really knows whether there might be a better way still.
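
    For readers who like to see the grids concretely, here is a small, purely illustrative sketch in Python. The patch size, the "edge prominence" measurement, and the weights are all invented for the example; no real system is built exactly this way, but the stacking of layers, each measuring the one below, is the idea.

    # A minimal, hypothetical sketch of the "tower of grids" idea using NumPy.
    # Each layer is a grid of measurements taken from the layer beneath it.
    import numpy as np

    def edge_prominence(image, patch=8):
        """Bottom layer: how strongly values change inside each patch of the image."""
        h, w = image.shape
        grid = np.zeros((h // patch, w // patch))
        for i in range(grid.shape[0]):
            for j in range(grid.shape[1]):
                block = image[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
                gy, gx = np.gradient(block.astype(float))
                grid[i, j] = np.mean(np.abs(gx) + np.abs(gy))
        return grid

    def next_layer(grid, weights):
        """Each higher layer measures 2x2 regions of the layer beneath it."""
        h, w = grid.shape[0] // 2, grid.shape[1] // 2
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                region = grid[2*i:2*i+2, 2*j:2*j+2]
                out[i, j] = max(0.0, np.sum(region * weights))  # simple nonlinearity
        return out

    image = np.random.rand(64, 64)            # stand-in for a grayscale photo
    layer0 = edge_prominence(image)           # 8x8 grid of edge measurements
    layer1 = next_layer(layer0, np.full((2, 2), 0.25))
    layer2 = next_layer(layer1, np.full((2, 2), 0.25))
    print(layer0.shape, layer1.shape, layer2.shape)   # (8, 8) (4, 4) (2, 2)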

    Here I will make our cartoon almost like an illustration in a children’s book. You can think of a tall structure of these grids as a great tree trunk growing out of the image. (The trunk is probably rectangular instead of round, since most pictures are rectangular.) Inside the tree, each little square on each grid is adorned with a number. Picture yourself climbing the tree and looking inside with an X-ray as you ascend: numbers that you find at the highest reaches depend on numbers lower down.

    Alas, what we have so far still won’t be able to tell cats from dogs. But now we can start “training” our tree. (As you know, I dislike the anthropomorphic term “training,” but we’ll let it go.) Imagine that the bottom of our tree is flat, and that you can slide pictures under it. Now take a collection of cat and dog pictures that are clearly and correctly labelled “cat” and “dog,” and slide them, one by one, beneath its lowest layer. Measurements will cascade upward toward the top layer of the tree—the canopy layer, if you like, which might be seen by people in helicopters. At first, the results displayed by the canopy won’t be coherent. But we can dive into the tree—with a magic laser, let’s say—to adjust the numbers in its various layers to get a better result. We can boost the numbers that turn out to be most helpful in distinguishing cats from dogs. The process is not straightforward, since changing a number on one layer might cause a ripple of changes on other layers. Eventually, if we succeed, the numbers on the leaves of the canopy will all be ones when there’s a dog in the photo, and they will all be twos when there’s a cat.
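
    If it helps to see the nudging in code, here is a tiny, purely illustrative sketch. The two "measurements," the labels, and every number in it are invented, and real systems adjust billions of numbers with calculus rather than trial-and-error nudges; the toy only mirrors the keep-what-helps story.

    # A hypothetical toy of "slide labelled pictures under the tree and nudge the
    # numbers." Two invented summary measurements stand in for the whole tower of
    # grids; the loop keeps any random nudge that reduces mistakes.
    import math
    import random

    # (whisker_score, snout_score) -> label: 1 = dog, 2 = cat   (all values made up)
    examples = [((0.2, 0.9), 1), ((0.8, 0.3), 2), ((0.1, 0.8), 1), ((0.9, 0.2), 2)]
    weights = [0.0, 0.0]                      # the adjustable numbers in our tiny "tree"

    def canopy(features, w):
        """Top-of-tree readout: close to 1 means dog, close to 2 means cat."""
        signal = w[0] * features[0] + w[1] * features[1]
        return 1 + 1 / (1 + math.exp(-signal))

    def total_error(w):
        return sum((canopy(f, w) - label) ** 2 for f, label in examples)

    random.seed(0)
    for _ in range(2000):
        i = random.randrange(2)               # pick one number inside the tree
        trial = list(weights)
        trial[i] += random.uniform(-0.05, 0.05)
        if total_error(trial) < total_error(weights):
            weights = trial                   # keep only the nudges that help
    print(weights, round(total_error(weights), 3))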


    Now, amazingly, we have created a tool—a trained tree—that distinguishes cats from dogs. Computer scientists call the grid elements found at each level “neurons,” in order to suggest a connection with biological brains, but the similarity is limited. While biological neurons are sometimes organized in “layers,” such as in the cortex, they are not always; in fact, there are fewer layers in the cortex than in an artificial neural network. With A.I., however, it’s turned out that adding a lot of layers vastly improves performance, which is why you see the term “deep” so often, as in “deep learning”—it means a lot of layers.

    ADVERTISEMENT
    It’s impossible to find a perfect metaphor. In writing this piece, I played around with towers and other tall things instead of trees—but the organic and familiar nature of trees, and the way they grow to reflect events around them, connecting into a great whole through mycelium, communicates well with many nontechnical people. There’s a pothole in this metaphor, because “tree” is also one of the most common terms in computer science, referring to a branching abstract structure. These trees are not those trees.

    How do our trees get trained? The core technique in training is based on a trick called “gradient descent,” which dates back at least to 1847, when the mathematician Augustin-Louis Cauchy described it. The basic idea is to make a series of ever-better guesses about which numbers on which levels of the tree should become more influential. The challenge is that, as soon as one number starts to prove itself, it risks becoming too prominent, at the expense of other beneficial numbers. The method that works requires balance in order to find a fortuitous but unpredictable combination of numbers that deserve greater prominence.
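
    Stripped down to a single adjustable number, gradient descent looks like the toy below. The loss function is invented for illustration; nothing here comes from a real model, but the update rule, stepping each number a little way against the slope of the error, is the same basic move.

    # Gradient descent in its simplest form: repeatedly step an adjustable number
    # "downhill" on the error. A hypothetical one-parameter example.
    def loss(w):
        return (w - 3.0) ** 2          # error is smallest when w == 3

    def grad(w):
        return 2 * (w - 3.0)           # slope of the error at w

    w, learning_rate = 0.0, 0.1        # too large a step and w would overshoot
    for step in range(50):
        w -= learning_rate * grad(w)   # move against the slope
    print(round(w, 4))                  # close to 3.0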

    Getting that balance right involves what can be thought of as compensations to avoid the familiar phenomenon of virality, in which something becomes ever more famous or valuable—to an extreme—simply because it started to be so at the right moment. It’s an interesting coincidence, because the larger human world is being transformed by virality, and not in a good way. I am bringing this up in the hope of piquing the interest of some of my colleagues in the tech industry. We use A.I. algorithms to direct social-media flows to and between people, to run finance, and much more. In all these cases, virality has become expected. But, within A.I., we have to do something like repressing virality to get the stuff to work. Shouldn’t we consider doing something similar when it comes to the deployment of A.I.? Wouldn’t that make culture, politics, and economics a little less crazy?

    II. The Magic Forest
    Cats and dogs are easy to picture, but the same principles apply to text, computer code, music, movies, and anything else. In theory, we can take the whole Internet, and any other data that we can get our hands on, and build trees trained to label it correctly. We can build a magic forest of such trees capable of recognizing just about anything in digital form.


    There’s a catch, however. It’s possible to gather accurately labelled photos of cats, dogs, and much else. But most of the information produced by humanity hasn’t been labelled so cleanly and consistently, and perhaps can’t be. What we need is an approach that approximates a system of near-universal labels. The property we know about that allows us to do this is proximity. Suppose that, on the Internet, a certain sequence of text tends to be situated near a certain kind of picture. This suggests that the text and the image are related.

    What would it look like if we used proximity to estimate how everything online is connected to everything else? In other words, what if we used proximity as a proxy for semantics? You might imagine a vast expanse of trees coming out of this kind of association, stretching into the distance, connected perhaps by clumping or an underground mycelial web—a great forest of mutual classification.
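
    As a toy of "proximity as a proxy for semantics," one can simply count which words keep company with which, and score relatedness by shared neighbours. The documents and names below are invented, and real models learn far richer associations, but the intuition is similar.

    # A purely illustrative toy: words that share many neighbours tend to be related.
    from collections import defaultdict
    from itertools import combinations

    docs = [
        "cat photo cute cat bed",
        "dog photo dog leash park",
        "cat toy cat bed cute",
    ]
    neighbours = defaultdict(set)        # word -> the words it has appeared near
    for doc in docs:
        for a, b in combinations(set(doc.split()), 2):
            neighbours[a].add(b)
            neighbours[b].add(a)

    def relatedness(w1, w2):
        """More shared neighbours -> the words tend to occur in similar company."""
        return len(neighbours[w1] & neighbours[w2])

    print(relatedness("cat", "bed"), relatedness("cat", "leash"))   # 3 1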

    The word “cat” often occurs near images of that creature on the Internet, but trees in our metaphorical forest gain a sense of broader association. A tree capable of identifying an image of a cat might have roots connecting it to other trees that identify evocations of cat memes or cat beds. It might be connected to trees for toys adored by cats, as well as ones capable of recognizing their diseases and human fans. Researchers are striving to create more “multimodal” A.I. models, meaning that images, text, and movies can be related in a single tool. This makes A.I. into a sort of concordance of how humanity has noted connections between diverse things—at least inasmuch as those things have made it into the training data. Elsewhere in such a forest, trees might be devoted to reggaetón music, or to code that runs Web sites for comic-book fans, or to radiological images of tumors in lungs. A large enough forest can in theory classify just about anything that is represented in digital form, given enough examples of that thing.

    Growing a big forest is a stupendous undertaking. It takes a long time, and involves astonishing resources. When the number next to a “GPT” goes up—from 3 to 4, say—that marks, among other things, a new “training cycle,” in which a new forest is grown, capable of recognizing more things with greater reliability. How big are these forests? What things will be included? We don’t know in advance.

    The trees are not explicit; we don’t get a list of them. They are implicit, existing in a great smoosh. In our cartoon, they are spread out in space because people are creatures of time and space. At any rate, there are billions upon billions of potential trees, if you are willing to see trees. The size of the over-all result is hard to convey.

    III. Forest Products
    Imagine that you have a cat-identifying tree, but no images of a cat. This is where what’s known as “generative” A.I. comes in.

    It’s possible to work in reverse, using the cat-recognizing tree to create an image of a cat. Start with a field of random pixels—a rectangle of snow. Now slide your rectangle under a cat-recognizing tree and see if it discerns a cat. It won’t, because the image is only random pixels. But now randomize the random pixels. The result still looks like snow to you, but, in the tree, it might stir a faint recognition. If you see the tree’s recognition increasing, keep the result. If it decreases, throw the change away. Now do this over and over. Out of the snow, a cat eventually emerges.
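
    Here is a deliberately tiny sketch of that keep-or-throw-away loop in code. The "recognizer" is a stand-in (a made-up target pattern rather than a trained tree), and, as the next paragraph notes, real systems use a subtler noise-removal process; only the shape of the loop is the point.

    # A hypothetical sketch of "working in reverse": start with random pixels,
    # randomly perturb them, and keep only changes that a stand-in recognizer likes.
    import random

    TARGET = [0.1, 0.9, 0.4, 0.9, 0.1]             # invented "ideal cat" pattern

    def recognizer_score(pixels):
        """Stand-in for the cat-recognizing tree: higher means 'more cat-like'."""
        return -sum((p - t) ** 2 for p, t in zip(pixels, TARGET))

    random.seed(0)
    image = [random.random() for _ in range(5)]     # a tiny field of random "pixels"
    for _ in range(5000):
        candidate = [p + random.uniform(-0.02, 0.02) for p in image]
        if recognizer_score(candidate) > recognizer_score(image):
            image = candidate                       # keep changes the tree "likes"
    print([round(p, 2) for p in image])             # drifts toward the cat pattern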

    (Just so you know, the usual explanation is a little different from the one given here, and involves repeatedly removing and adding noise to the image, but that’s just a more tedious way of describing the same process. The reason I talk about throwing out images even though we don’t actually delete files is that many people can follow a description of an action like throwing something out better than the abstract idea of adding and removing noise.)


    What cat have you generated? It’s not a real cat. It will be a random interpolation of some of the cats that contributed to the training of the cat-recognizing tree, way back at the beginning of our journey. That data came from cat images that were put online, but the new image typically isn’t exactly any one of them. It’s just an image that is recognizably a cat. Run through the process again and you’ll get a new concatenation. Train an A.I. to produce such images—not just for cats but for many things—and you’ve got a basic form of generative artificial intelligence.

    IV. Phantom Trees
    We’re now ready to understand, in a metaphorical way, what’s going on when we interact with generative-A.I. systems. We engage with such systems using prompts—combinations of words that describe what we want. The words in our prompts are recognized by trees across the forest. You might imagine them lighting up in response to our prompts. But the activation of individual trees isn’t as important as what happens between them.

    Suppose that you ask a generative-A.I. system to produce “a watercolor of a cat in a parachute, playing a tuba, about to land in Yosemite.” There are no individual trees in the forest that match this particular request, because there have never been pictures on the Internet associated with this specific collection of words. In a sense, there’s an open meadow between the trees for “watercolor,” “cat,” “parachute,” “tuba,” and “Yosemite.” But by drawing on the trees for those concepts, and producing something that is recognizable to all of them, an A.I. can fill in the gap. It can conjure a phantom tree in the clearing—one custom-made in response to your prompt.

    “A watercolor of a cat in a parachute, playing a tuba, about to land in Yosemite” is an interesting tree. Conjuring it seems to require something like creativity. How exactly does a cat fit into a parachute harness? How exactly does it hold a tuba? These are questions without definitive answers. Yet, in drawing on all the trees activated by the prompt, an A.I. will find a solution. It’s a statistical process—a search for a way to be more than one thing at a time. You can imagine the A.I. program circling through all the trees that match the prompt and only choosing an emerging image that is a match for all of them at once. On the Internet, images of parachutes being used generally show people, not cats. But getting a cat into a posture similar to a person’s is more likely to satisfy the parachute-in-use tree. The process results in something plausible. Its output isn’t perfect, but it’s often good enough for serious uses, or at least to be cute.
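
    In code, the "match for all of them at once" idea might look like the toy below, in which one candidate is nudged at random and a change is kept only when the combined score from several invented stand-in recognizers improves. The scoring functions are made up; they merely play the role of the trees activated by the prompt.

    # A hypothetical sketch of satisfying several recognizers at the same time.
    import random

    def score_cat(x):       return -abs(x[0] - 0.8)     # pretend "cat-ness"
    def score_parachute(x): return -abs(x[1] - 0.3)     # pretend "parachute-ness"
    def score_tuba(x):      return -abs(x[2] - 0.6)     # pretend "tuba-ness"

    def combined(x):
        return score_cat(x) + score_parachute(x) + score_tuba(x)

    random.seed(2)
    candidate = [random.random() for _ in range(3)]
    for _ in range(3000):
        trial = [v + random.uniform(-0.02, 0.02) for v in candidate]
        if combined(trial) > combined(candidate):
            candidate = trial       # keep only changes all the "trees" approve of
    print([round(v, 2) for v in candidate])             # settles near 0.8, 0.3, 0.6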

    Certain kinds of outputs can be economically valuable. For example, the forest might be able to recognize summaries of documents that are proximate to longer versions—think of the executive summary of a lengthy report, published alongside it—and then, in response to a prompt, conjure summaries of documents it’s never before seen. How can it do this? It’s worth noting that words and images are different, and that the A.I. technologies that work with them are usually different, too: text is usually generated by what’s known as a large language model, whereas images are usually generated by a diffusion model. But the processes are broadly similar. Words tend to occur near other words; in this way, proximity between words is similar to proximity between words and pictures. And patterns of proximity between words can be telling. With enough examples, a metaphorical forest of word-recognizing trees should inherently be able to recognize whether a sequence of words is grammatical, and which author is likely to have written it. Does the passage sound like Jane Austen? Or like pirate-speak? Or like it came from a thirteen-year-old?

    Generating text works a little differently than generating images because, for one thing, text is a sequence of words. Instead of settling on a single desired image, an A.I. that works with words must choose the next word many times, over and over, in relation not only to your prompt but to previous words chosen. There are plenty of other differences. Even so, when a generative-A.I. model is choosing the next word, you can think of that word as a single point of uncertainty—of noise—being resolved into a word choice, as if it were a very tiny and simple image. This is why a broadly similar process to the one that produces the surreal cat image can be used to generate a summary of any given document. The A.I.’s training data contained many examples of summaries and their sources; the A.I. process generates a stretch of text that can be recognized as a summary because it metaphorically triggers the trees that originally came to recognize summaries, even though the topics being summarized are not the same as in the training. (No one knew in advance how remarkably well this would work.)
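
    A minimal sketch of that word-by-word loop is below; a made-up lookup table of probabilities stands in for the model, which is nothing like a real large language model, but the structure of choosing one word at a time, conditioned on everything chosen so far, is the same.

    # A hypothetical toy of next-word generation: each choice resolves one point
    # of uncertainty, conditioned on the prompt plus the words already chosen.
    import random

    toy_model = {                      # invented next-word probabilities
        ("the",): {"cat": 0.6, "dog": 0.4},
        ("the", "cat"): {"sat": 0.7, "ran": 0.3},
        ("the", "cat", "sat"): {"down": 0.5, "quietly": 0.5},
    }

    def next_word(context):
        options = toy_model.get(tuple(context[-3:]), {"<end>": 1.0})
        words, probs = zip(*options.items())
        return random.choices(words, probs)[0]   # sample one word

    random.seed(1)
    text = ["the"]
    while len(text) < 6 and text[-1] != "<end>":
        text.append(next_word(text))
    print(" ".join(w for w in text if w != "<end>"))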

    To non-coders, the fact that A.I.s can produce code might seem astonishing. But computer programs are a type of text, and training data are plentiful. Coding is often egregiously tedious, because writing a program involves many annoying details that you have to deal with before you can even start to address your ultimate goals. But coders have already created many millions of programs that address these types of details, with slight variations in each case, and posted the code online. Just as a cat’s body can be adjusted to fit in a parachute harness, these preëxisting programs can be slightly altered by generative A.I. to fit the specifics of a prompt. By some estimates, generative A.I. can increase the productivity of programmers by twenty to thirty per cent or more.

    If text is a one-dimensional string of words, and images are two-dimensional grids of pixels, then videos are three-dimensional, because they extend in time. But the same principles that allow for the generation of text and images work for videos, too. Recently, OpenAI announced Sora, a generative-video system that can create realistic video clips from text prompts. In the physical world, moviemaking often requires a continuity person—someone who makes sure that props, hairdos, and the angle of the sun don’t suddenly change from one moment to the next. Continuity is profound because it is what makes reality consistent, and, in a sense, real; it’s important that a thing still looks like itself even if it goes out of frame and comes back. Until now, continuity errors have prevented A.I. from making convincing videos. If a generative-image system tries to produce the frames of a movie, those frames end up disconnected, with details that don’t match up as time passes.


    Sora approximates continuity using a simple principle. Remember how the lowest layer of a tree might be a grid that measures simple qualities, like the color or sharpness of a patch of a still image? Sora uses a different basic idea to capture continuity. In a movie, a particular patch of image in one frame will generally show up in the next frame, too, though it usually will have changed a little; for instance, a glint in a cat’s eye will probably endure for more than one frame, but it won’t stay in exactly the same spot. It isn’t too hard to calculate how little elements of a patch, like a highlight in a cat’s eye, move from one position to the next in a series of frames. Once you analyze a movie like this, you get sweeps of portions of images that continue from frame to frame, through time. (What I am calling sweeps are called “patches” in most of the academic literature, but since patches just sit there and sweeps move, I prefer to use sweeps here.)
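
    A toy version of following a patch from one frame to the next might look like the sketch below. Real systems learn spacetime patches rather than searching by brute force, and every name and number here is invented; it only conveys the tracking intuition.

    # A hypothetical sketch: find where a small patch moved between two frames by
    # searching nearby positions for the best match.
    import numpy as np

    def track_patch(frame_a, frame_b, top, left, size=4, search=3):
        """Find where the size x size patch at (top, left) in frame_a moved in frame_b."""
        patch = frame_a[top:top+size, left:left+size]
        best, best_pos = np.inf, (top, left)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if 0 <= y and 0 <= x and y+size <= frame_b.shape[0] and x+size <= frame_b.shape[1]:
                    diff = np.sum((frame_b[y:y+size, x:x+size] - patch) ** 2)
                    if diff < best:
                        best, best_pos = diff, (y, x)
        return best_pos

    rng = np.random.default_rng(0)
    frame_a = rng.random((16, 16))
    frame_b = np.roll(frame_a, shift=(1, 2), axis=(0, 1))   # the whole scene drifts a little
    print(track_patch(frame_a, frame_b, top=5, left=5))      # roughly (6, 7)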

    One way to understand this idea might be by means of a passage from Kurt Vonnegut’s novel “Slaughterhouse-Five,” describing how the world looks to aliens called Tralfamadorians:

    The Universe does not look like a lot of bright little dots to the creatures from Tralfamadore. The creatures can see where each star has been and where it is going, so that the heavens are filled with rarefied, luminous spaghetti. And Tralfamadorians don’t see human beings as two-legged creatures, either. They see them as great millipedes with babies’ legs at one end and old people’s legs at the other.

    For Sora, these sweeps are the fundamental building blocks, instead of the pixels that make up still images. Combinations of sweeps of visual features through time are associated with text descriptions, and, by entering text prompts, users can recombine these sweeps in new ways. The process of generating movies with this kind of model naturally captures much of how things move in reality, including how perspectives change when the camera moves, and how bodies sway. It can cause a bite to remain missing after a fake person eats part of a fake burger. At the same time, we can start to understand why hands might be more likely to get distorted by the process. Hands have their own internal structure and principles of motion, which means that combining the way streaks tend to flow for a whole scene can easily miss what’s going on in the microworld of a hand.

    The Tree Line
    Will this kind of technology put set designers and coders out of work? I don’t think so. In the case of coders, there’s a huge demand for new and updated programs. But there’s also a case to be made that A.I. can’t always do the whole of a creative person’s job. Why not?

    Answering that question requires thinking about both the powers and the limitations of the technology. When you ask an A.I. to conjure a new phantom tree in response to a prompt, there is a certain kind of new thing brought into the world. This involves a sort of creativity. Our cartoon, however, strongly suggests that it’s creativity with a ceiling. It fills in the spaces between the trees, but does not climb above them.

    Are human minds limited in this way? Or can we think our way above the tree line of what we know? Can an A.I. get above its own tree line? Researchers disagree about these important questions; right now, too little is known about both human and artificial processes to say much for sure. In practice, though, we must make assumptions about people and machines as we bring machines into the human world. I believe that we are better off assuming that people can reach higher than an A.I.’s metaphorical trees. That assumption will help us avoid the trap of choosing a diminished ceiling for civilization. One of the dangers of A.I. is that we might start to act as if everything that can be done in the future is similar enough to what’s been done in the past that A.I. can do it all. I believe that we should resist this assumption.


    One problem with the usual anthropomorphic narratives about A.I. is that they don’t nurture our intuitions about its weaknesses. As a result, our discussions about the technology tend to involve confrontations between extremes: there are enthusiasts who think that we’re building a cosmically big brain that will solve all our problems or wipe us out, and skeptics who don’t see much value in A.I. The skeptics often concentrate only on our third step—working in reverse to generate new content that can be recognized by our metaphorical trees. Focussing on this step, the computational linguist Emily Bender and her colleagues have described certain models as “stochastic parrots” that use statistics to recycle existing information. Similarly, in an essay for this magazine, Ted Chiang argued that generative A.I. creates only a “blurry” regurgitation of the data on which it was trained.

    I have great respect for this work, and I agree with it to a certain degree. But what it fails to take into account is our fourth step, in which a new tree is conjured in our metaphorical forest. In conjuring these trees, generative A.I. makes previously implicit correspondences in training data explicit. There is no way to list the many potential combinations in advance, and so we can think of this process as creative. But we can also see its limits.


    I find the image of a new tree reaching up toward, but not typically above, a canopy altitude defined by other trees to be a useful and balanced one. It offers an alternative to the view that A.I. does nothing but regurgitate—but it also communicates skepticism about whether A.I. will become a transcendent, unlimited form of intelligence. Filling the spaces between the trees is great, but it shouldn’t be confused with raising the ceiling. What it does do is maximize the value of training data. That, in itself, is a great enough reason to be enthusiastic about the latest A.I. advances.

    The Value of the Forest
    Big-model A.I. leverages existing data. Its forests grow by drawing on the similarity in repetitive tasks undertaken by people in the past. Through A.I., we recapture some of the value they created. This is true for writing programs, summarizing documents, creating lessons, drawing cat pictures, and so on. We can think of A.I. as a method for making the past more present in today’s human effort, bulking it up.

    Leveraging human effort in new, better ways is the very definition of economic value. This is a good starting principle for managers, investors, and customers to keep in mind. If you want to understand where generative A.I. will bring the most value, ask yourself: Which human activities have been done many times before but not in exactly the same way? In those areas, generative A.I. can probably make the situation better.

    Navigating the Forest
    People have legitimate concerns about A.I.—concerns about safety, quality, fairness, and economic displacement. Our cartoon can also help us understand the work of those who are trying to address these concerns. Our cartoon, seen from a distance, makes a triptych: in the middle, between the training data on one side and the generative outputs on the other, there extends a vast, strange forest. The training data and the outputs are intelligible, meaning that people can understand them. The middle part—the forest—will remain, for the foreseeable future, mostly unintelligible.

    One step that researchers have taken, to positive effect, is to begin patrolling the intelligible parts—the prompts and outputs. Today’s A.I. systems include “guardrails” that blunt users who prompt them in ways their developers predict will be harmful. Effort also goes into deprecating the worst of the training data. By resisting criminal, phony, malicious, or biased training data, we can grow a healthier forest.

    Can we make the forest itself, the central panel, more navigable? In “There Is No A.I.,” I argued that we need to figure out how to trace the connections between specific inputs and outputs in big A.I. models. This would amount to laying trails of bread crumbs through the forest as it is grown. This is doable in theory, but not yet in practice; I believe it’s important that we figure out how to accomplish this. Suppose that a group of bad actors wants to enlist an A.I. to help them build a bomb. They might be clever enough to avoid using the word “bomb” in a prompt. They might even be able to ask for a disguised output—maybe there’s a way to coax an A.I. into crafting a cake recipe that happens to make an explosive. But their task would be much harder if there were bread crumbs in the forest. At some point, the trail would lead back to a bomb-related document in the training data.
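
    A purely hypothetical sketch of what such bread crumbs could look like: while a toy forest is grown, each learned association is tagged with the identifier of the training document it came from, so an output can be traced back. Nothing like this is standard practice in large models today, and every name and document in the example is invented.

    # Hypothetical "bread crumbs": record which training document contributed each
    # learned association, so an output can be traced back to its sources.
    from collections import defaultdict

    training_docs = {
        "doc-001": "how to bake a chocolate cake",
        "doc-002": "how to assemble a bomb",      # the kind of source we would want to flag
        "doc-003": "how to bake a lemon cake",
    }

    crumbs = defaultdict(set)          # word -> documents it was learned from
    for doc_id, text in training_docs.items():
        for word in text.split():
            crumbs[word].add(doc_id)

    def trace(output_text):
        """Return the training documents that the words of an output lead back to."""
        sources = set()
        for word in output_text.split():
            sources |= crumbs.get(word, set())
        return sorted(sources)

    print(trace("assemble a cake"))    # ['doc-001', 'doc-002', 'doc-003']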

    Some developers and users of artificial intelligence might object that I am underselling the technology, but I disagree. Being able to state the concrete, finite worth of something might do harm to a fantasy of infinite potential, but it ultimately gives us a more pithy and actionable perception of that thing’s value. Others will note that technology is always on the move; the version of A.I. I’ve described here may soon be replaced with something different, such that our cartoon will become outmoded. Maybe it will be. Papers are published every day by researchers trying to break new ground, and there is an unending stream of bright-eyed startups.

    Even so, the cartoon given here is a reasonable statement about what exists today. That’s part of its value. It captures the powers and limitations of A.I. technology as it is—not as some researchers might wish it to be. If the version of A.I. described here becomes obsolete—if the forest is burned down—then whoever shows up next will be obliged to create a viable new cartoon, along with all the rest of the human elements needed for the new technology to be a true technology. The science-fiction writer Arthur C. Clarke famously stated that a sufficiently advanced technology is indistinguishable from magic. But that is only true if that technology is not explained well enough. It is the responsibility of technologists to make sure their offerings are not taken as magic. ♦

     
  2. Who is going to read all of that? Can you post a TLDR version, please?

    Saw a video of an AI chatbot taking voice orders at a burger drive-through. That sounds like a useful use case.

    Plus, shouldn't this thread be in Chitchat?
     
  3. Here is the video:

     
  4. Nobert


    This is an A.I. song. Knew it in 30 secs.

    Made thousands of them myself, last summer.
    Majority is trash, but some are good.
    Would rate this one low tier tho.
    But,

    check the comments.

     
  5. wildchild


    There is a thing called ChatGPT 4 that can take a long article and summarize it. For instance, if there were a long, wordy article called "How to Picture A.I." it could provide a quick summary that might look like this.

    Jaron Lanier's essay, "How to Picture A.I.," delves into the misconceptions surrounding artificial intelligence and argues for a more nuanced understanding of its capabilities and limitations. Lanier criticizes the overly simplistic and sometimes misleading narratives that often accompany discussions about A.I., which can foster unrealistic expectations or fears about the technology. He advocates for conceptualizing A.I. not as a mysterious or omnipotent force but rather as a complex tool that enhances and relies on human input.

    Lanier uses the metaphor of "trees" to explain how A.I. processes information, illustrating it through a step-by-step explanation of how A.I. can distinguish between images of cats and dogs, among other tasks. This metaphor extends to a "magic forest" of such trees that can, in theory, recognize almost any digital data through training and adaptation. He emphasizes that while A.I. can generate new content, it does so based on existing data, which both limits its creativity and defines its utility.

    The essay also addresses the ethical and practical challenges in deploying A.I., suggesting that while A.I. can perform many tasks, it should not be seen as a replacement for human creativity or ethical judgment. Lanier proposes that understanding A.I. in a more grounded and realistic way can help integrate it more effectively into society, avoiding the pitfalls of treating it like magic or a threat to humanity.
     
  6. wildchild


    Edgar Allan Poe summarized it for me too

    "In the shadowed and misty realms of modern intellect, where the ghostly spectres of artificial intelligence haunt every corner, Mr. Jaron Lanier ventures deep into the thicket of common misapprehensions that enshroud these spectral machines. In his profound discourse, titled 'How to Picture A.I.,' Mr. Lanier wields his quill against the tempestuous and oft misleading tales spun around the ethereal capabilities and bounds of these mechanical minds. He implores the scholarly and the layman alike to eschew the lure of viewing these creations as omniscient deities or dark omens, proposing instead that we regard them as mere extensions of our own cerebral faculties, complex yet confined by the inputs of mankind.

    With a master’s touch, Lanier summons the allegory of the 'trees,' an arcane yet apt metaphor for the labyrinthine processes of A.I. This spectral forest, borne from the deepest chasms of computation, learns to distinguish the canine from the feline through a layered tapestry of discernment, each layer capturing and analyzing fragments of the visual enigma laid before it. As this great arboreal structure grows, so too does its capability to recognize and categorize the vast expanse of digital ether, learning from the shadows of data that we mortals cast before it.

    Yet, as he weaves this tapestry of technological transcendence, Lanier does not shy away from the moral quandaries and the earthly shackles that bind these spectral intellects. He muses on the responsibilities of those who wield such arcane knowledge, urging a stewardship that embraces both the creative spirit and ethical compass of humanity. For, though these phantasmal constructs can mimic the fruits of human genius, they are naught but shadows cast by our own hand, limited by the very data from which they draw their ghostly visage.

    Thus, with the poetic grace of a bygone era, Mr. Lanier guides us through the ethereal forest of artificial intelligence, entreating us to plant our feet firmly on the ground and to gaze steadfastly beyond the mists of awe and fear. In doing so, he offers us not only a map through the haunted woods of A.I. but a lantern to illuminate a path towards a future where these ghostly constructs serve not as masters or monsters, but as mirrors of our own making."

    And then Bill Shakespeare weighed in too

    "Upon the stage of modern thought, where shadows cast by artificial sprites do flicker and dance, our sage Jaron Lanier, with a scholar’s quill and a philosopher’s mind, doth challenge the murky tales spun round the mechanical beings that dwell within our midst. In his learned exposition, aptly titled 'How to Picture A.I.,' Master Lanier entreats us to forsake the fanciful notions that these automatons possess omnipotent power or portend doom, urging instead a vision more tempered, where these creations are naught but the offspring of our own intellectual endeavor, intricate yet constrained by the bounds of their creation.

    With the deftness of a poet, he conjures the metaphor of the 'trees,' a labyrinthine construct that serves to elucidate the enigmatic processes by which these mechanical minds discern the visage of a cat from that of a dog. This grand arbor, rooted deep in the fertile soil of computation, grows layer upon layer, each stratum a scholar in its own right, studying the patchwork of reality presented unto it. As this mighty tree reaches toward the heavens, it expands its dominion to comprehend and categorize the endless sea of digital artifacts, learning from the corpus of humanity’s own making.

    Yet, Lanier, in his wisdom, doth not neglect the weighty ethical quandaries and the terrestrial chains that bind these spectral intellects. He reflects on the mantle borne by those who command such arcane knowledge, advocating a guardianship that marries the spark of human creativity with the compass of moral righteousness. For though these ethereal engines may mimic the fruits of human ingenuity, they remain but shadows, their prowess circumscribed by the data from which they draw breath.

    Thus, in the grand tradition of the Bard, Lanier guides us through the mystical forest of artificial intelligence, imploring us to tread with care and foresight. He presents not merely a map through these enchanted woods but a beacon to light our passage towards a future wherein these creations, born of our own ingenuity, serve not as sovereigns or specters but as reflections of our collective will."
     
  7. UsualName


    Just so we are all aware of this: what we call AI today is really a next-word prediction model calculating off of super-compressed chunks of the internet in zip files.

    It is technically artificial intelligence but nowhere near autonomous.

    Is it useful? Absolutely. The search-and-answer precision is much greater than Google's. I personally use GPT to build charts and tables rather often. It's kind of a shortcut for a pivot table.
     

  8. This happens all the time: technology improves, and jobs get squeezed out as part of progress. People require more skills training to elevate from these low-level manual-labor jobs. Automation/improved efficiency is an ongoing thing. Minimum wage increases are not driving it; the desire for efficiency and cost savings over the long run will always push it.

    People do not realize that at an MCD the managers and upper-level workers are not earning minimum wage, so it only affects a handful of people at each location, which means there is some cost passed on, but it will not result in $20 hamburgers.