This is a preview. The full article is published at feedle.world.

The Midas Machine

By Katalin Balog | Feedle | Top Stories

In a recent bestseller, Eliezer Yudkowsky and Nate Soares argue that artificial superintelligence (ASI), if it is ever built, will wipe out humanity. Unsurprisingly, this idea has gotten a lot of attention. People want to know if humanity has finally gotten around to producing an accurate prophecy of impending doom. The argument turns on an ASI that pursues its goals without limit: no satiation, no rest, no stepping back to ask what for? It seems like a creature out of an ancient myth. But it might be real. I will consider ASI in the light of two stories about the ancient king Midas of Phrygia. But first, let's see the argument.

What is an ASI? An ASI is supposed to be a machine that can perform all human cognitive tasks better than humans. The usual understanding of this leaves out a vast swath of "cognitive tasks" that we humans perform: think of experiencing the world in all its glory and misery. Reflecting on this experience, attending to it, appreciating it, and expressing it are some of our most important "cognitive tasks". These are not likely, to use an understatement, to be found in an AI. Not just because it is rather implausible that AI consciousness will ever emerge, but also because, even if AI were to become conscious, it would not do these things, not if its developers stuck to the goal of creating helpful assistants for humans. AIs are designed to be our servants, not autonomous agents who resonate with and appreciate the world.

OK, but what about other, more purely intellectual tasks? LLMs are already very competent at text generation, math, and scientific reasoning, as well as in many other areas. While doing all those things, LLMs also behave as if they are pursuing goals. So are they similar to us, after all, in that they know many things and are able to work toward goals in the world? Humans know things by representing the world to be a certain way, say, by thinking that the Earth is round.
We do things based on our beliefs and desires. If I find elephants desirable, and believe that there is one in the next room, I will be motivated by that felt desire to check out the next room. The prevailing view about current LLMs is that they do not possess genuine beliefs and desires in the original sense of these words, the sense in which humans do. Current LLMs do not represent the world, because they are not connected to it in the way humans are, who use their senses and bodies to orient themselves in it. What they do very well is predict the next token in a conversation based on the prompt and their post-training instructions. However, while they lack genuine beliefs and desires, LLMs exhibit what David Chalmers calls "quasi-beliefs" and "quasi-desires": that is, their behavior can be interpreted as if it arose from beliefs and desires. Their apparently goal-directed behavior emerges from trained features of their neural...
