About Geoffrey Hinton’s retirement announcement

In TLDR, I read the following about Geoffrey Hinton’s retirement:

‘Godfather of AI’ quits Google with regrets and fears about his life’s work (2 minute read)

Geoffrey Hinton, one of the ‘Godfathers of AI’, has quit his job at Google to speak freely about the risks of AI. He says that a part of him regrets his life’s work. Hinton’s work directly led to the development of technologies like ChatGPT and Google Bard. He claims that competition with new AI technologies will be impossible to stop and will result in a world with so much misinformation that people won’t be able to tell what is true anymore. The technology could also eliminate jobs, and possibly humanity itself, as it starts learning to write and run its own code.

I emailed my first thoughts to Jack:

I’m thinking about this from a different perspective than Geoffrey Hinton. My thinking starts with, “Have people ever been able to tell what’s true?”

It is childlike to think historical media, including newspapers, carried nothing but “true” information. And the information they have carried has always been curated and censored.

The current media is polarized on many issues…the E, S, and G of “ESG” among them…each appealing to the polar opposites, the noisy minorities at the two ends of the spectrum. Yet most of us are the silent majority. Silent, but nonetheless the majority…not ideal clients for today’s media, whose best customers demand the “truth” as they want to see and hear it.

Jack responded:

I agree. People should understand when to trust and when not to trust things; that is the smart thing to do. Regardless, many people trust blindly, like those who consumed hand sanitizer to get rid of COVID. You would think that even the label on the bottle would stop people, but who knows how many people died from that.

I guess things have to be limited to the scenario they are in. In education, for example, there could be specific AIs that respond to specific kinds of questions. It could all be one AI, but if you ask it math it knows to use the ‘math brain’, if it is history it uses the ‘history brain’, and so on. That way, if trained properly, misinformation should not be part of the index.

But then, it just turns the news companies into the AI companies. There is always a filter and controller somewhere. Maybe that is a new concept…
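
Jack’s “math brain / history brain” idea is essentially a router sitting in front of specialist models. Below is a minimal sketch of that pattern in Python; the keyword-based router and the stub specialist functions are hypothetical stand-ins, not any real product’s architecture, and a production system would use a trained topic classifier and actual models.

```python
# A minimal sketch of the "one AI, many brains" routing idea described above.
# Everything here is a hypothetical stand-in: the keyword router and the stub
# specialist functions illustrate the pattern, not any real system.

def math_brain(question: str) -> str:
    # Stand-in for a model trained only on vetted math material.
    return f"[math brain] answering: {question}"

def history_brain(question: str) -> str:
    # Stand-in for a model trained only on vetted history material.
    return f"[history brain] answering: {question}"

# Each topic maps to its specialist. Unknown topics get a refusal instead of
# a guess, which is how this design tries to keep misinformation out.
SPECIALISTS = {
    "math": math_brain,
    "history": history_brain,
}

TOPIC_KEYWORDS = {
    "math": {"solve", "equation", "integral", "multiply"},
    "history": {"war", "empire", "century", "revolution"},
}

def route(question: str) -> str:
    """Send the question to the first specialist whose keywords match."""
    words = set(question.lower().split())
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            return SPECIALISTS[topic](question)
    return "No specialist brain covers this question."

if __name__ == "__main__":
    print(route("Which empire built the aqueducts?"))  # -> history brain
    print(route("Solve the equation x + 2 = 5"))       # -> math brain
```

A real deployment would swap the keyword match for a trained classifier, but Jack’s closing point still holds: whoever curates each brain’s training data becomes the filter and controller.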

more to follow on this topic…

Photo of Geoffrey Hinton from https://www.cs.toronto.edu/~hinton/
