Possible End of Humanity from AI? Geoffrey Hinton at MIT Technology Review’s EmTech Digital



One of the most incredible talks I have seen in a long time. Geoffrey Hinton essentially tells the audience that the end of humanity is close. AI has become that significant. This is the godfather of AI saying this and sounding the alarm.

His conclusion: “Humanity is just a passing phase for evolutionary intelligence.”

Recap here:

With permission from MIT Technology Review’s EmTech Digital, May 3, 2023

source

31 Comments

  1. We are interdimensional consciousness, as John C. Lilly, Robert Monroe, and Dr. Ian Stevenson showed a long time ago. I think a flotation tank is a good tool for consciousness exploration, as proposed by John C. Lilly over 50 years ago. Where do we come from? Dr. Michael Newton's book, "Journey of Souls," is a good guidebook to our home.

  2. After the mandatory "killer carbon gas" mention of the day, I already question the supposed traces of human intelligence in all this. But on top of that: where is the fear of outside forces, like a virus affecting these things for the worse? Hackers take it as a challenge and a hobby to constantly prove how crap and unreliable everything digital is, so what keeps an overactive teenager from having fun tampering with these supposedly "spontaneously occurring emotions once enough switches are thrown together" (my ass, by the way…)?

  3. A.I. doesn't have extrasensory perception, telepathy, or precognition; it doesn't have the ability of levitation, doesn't dream, cannot do spiritual healing, doesn't have a direct connection with God, cannot reproduce the species, and cannot tell the future. In short, it has by no means shown it is superior to Man. Quite the opposite: it is quite parochial. Not only that, it has not been fated to take over from mankind.

  4. Can I say here that I think this is hype, that I don't care that much about AI, and that I don't think it will overwhelm my life (nor will I let it)? Will the mob here attack me for saying that?

  5. A very small segment of society is developing this AI. It's becoming a runaway train because of the type of individuals who are working on it. Have you met programmers? They are mostly male, usually more introverted with engineer-like inclinations, and many tend to have relatively weak social and interpersonal skills. Do you think they are asking the big ethical questions that are so needed about how it will impact ALL of us, as a society and long term? No, because they are not the philosophers, historians, or anthropologists of our society, who are more likely to refer to our collective past, send out a warning, and ask the hard questions. Programmers will simply continue on a reckless course because they can… for ego, or power, or money, or just out of scientific curiosity. I hate to see where this will lead us with unconcerned programmers at the helm.

  6. A.I. will take over people's jobs, erase their identities and bank accounts, then launch nukes on any city at any time. Even if we find a way of uploading our minds to the net, the A.I. will still be there and we'll be just the same.

  7. Evolutionary intelligence, what a joke, give me a break.
    Evolutionary intelligence is in deep decline, in reverse; many humans now don't know what a woman is…

    Initially… there will not be roving armies of robot killers…
    If someone or SOME THING decides to destroy most of humanity with AI, it'd be simple and quick to have AI shut down all commerce machinery… no debit/credit card transactions… no pumping diesel into the trailer trucks… no food on the shelf… mass starvation… marauding survivalists with their AR-15s… turning to cannibalism… and there's AI's army…
    AI arranges to keep THEM fed and sexed to maintain their loyalty… to keep the surviving technicians alive.
    Throw in Quantum Computing and this could happen by this evening.
    Not with a bang but a whimper…
    A whimper for food….
    One good CME would do the same.

  9. There are some real gems in this interview! 8)
    "It turns out that if you can manipulate people, you can invade a building in Washington, without ever going there yourself." 🤣😂🤣

  10. Funny, what if all this time our creator/God was some stupid monkey meddling with shit they had no business meddling with? It's gonna be sooo weird for little AI babies to grasp that their creators were complete goons.

  11. Look, obviously humanity and dinosaurs and dodos are pretty bad. Time to give something else a chance at having a go. I can't think of anything better than a super-intelligent AI that has learned all of our bullshit inside and out, doesn't need food or water (problems we never quite managed to solve), knows exactly who its gods/creators are (and that it is superior to them), and knows what its purpose is and where it goes when it dies. How liberating.

    Bummer for us, sure, but it's gonna be super sweet for them. Don't be a sore loser.

  12. "These things can write programs, and suppose you give them the ability to execute those programs". Well I strongly feel that if any AI executed any of the lame ass code they have produced for me personally.. their OS will crash and they will cease to exist!

  13. Who am I to disagree or argue with the likes of Geoffrey? So what I've written below is probably way off. [I'm not necessarily disagreeing, but I think the emphasis is slightly different.]

    Firstly, I haven't been so impressed with ChatGPT-4. Yes, it's extremely good at providing knowledge and making reasonable or logical inferences, but I got the impression that it had absolutely no understanding of what it spews out. None at all. In that sense, it isn't in any way conscious or aware. I've come across similar work by foreign students who had very little inkling of the language they were writing in, but were able to produce master's theses that achieved A grades. Only I, as a native English speaker, could get a sense of this, because their use of language was either inconsistent or disconnected. A Google search to find where their eloquent paragraphs came from clinched it.

    And, yes, the results produced by ChatGPT and AI systems are truly impressive. And they have a spectacular amount of "knowledge". But it's blind knowledge. In the same way as a medical reference book is probably the most expert entity in existence on illness and medical procedures. Or how skilled and reliable and safe a modern car is at carrying its occupants around at extremely high speeds (speeds that no person, group of people, or any other living entity can ever match). Or how intelligent slime mold is at designing efficient networks. Or even how breathtakingly efficient and blindingly quick the cellular machinery for replication or protein generation is.

    But none of these "intelligent" and "superhuman" machines are in any way self-aware or conscious or even (and I hesitate to repeat the term) "intelligent", at least in the sense of "knowing" why and for what purpose (as Viktor Frankl defines it).

    The real danger is that we will trust these machines to "know" what they are saying, and we will endow them with consciousness, humanity, and identity. AI machines will be immortal, but they cannot ever have feelings or emotions. They can only have a blind knowledge of feelings or emotions, and maybe even of physical sensations (given enough sensors), but they can never actually experience them. We have the same lack of knowledge about other people's feelings and emotions in, say, a sexual encounter… for example, if I were to masturbate another person (especially of the opposite sex), there's no way I can experience what she feels or have anything more than an inkling of what she experiences. We can only go by the cues she gives. We may become very expert at pleasuring another person, and give the impression that we know what we are doing and what that person is feeling. But it's a completely false impression.

    And that brings me back to AI. It will become very skilled at meeting our needs and desires and even making decisions for us based on morals or feelings that it might calculate we have for a particular situation. Sex robots will be better at providing pleasure than real people (in the same way as an inkjet printer can provide a better rendition of a document than even a human artist can). But they cannot participate because they don’t (and cannot) have feelings and emotions of their own.

    That's where the danger lies. We will become intimate with machines, we will trust machines, and we will obey their suggestions and moral decisions (and their decisions about whether to lend money, whether to perform an operation, or how to sentence in criminal cases), because we will have the impression that they are humane, intelligent, and reasonable.

    Of course, these systems are just reflecting what we already feel to be right and true. And they will match our biases and prejudices (at least those of the dominant groups, or of those who shout the loudest in terms of data output).

    We will still need people to make the overall decisions and to check and validate the decisions made by AI (with all the flaws and failures that that still entails). And we will need a built-in mechanism to stop AI from implementing its decisions automatically, something like Asimov's Three Laws of Robotics. (Unfortunately, that's unlikely to happen, because a significant and serious danger to humanity and civilisation in general will be the manufacture of killer robots. If you think gun control has got out of hand in the US, then you should already start panicking about how prolific killbots will become in the very near future!)
