A really simple AI question

In order to take over the world, AI is going to need electricity.  Are the boffins warning about human extinction suggesting that it's going to develop the ability to build and run power stations before it finally does away with us?

Yes, that's right.  Unfortunately, the reason we are developing AI (because it's so useful) is the same reason we are likely to develop robots with sufficient dexterity and movement to allow the AIs to use them to construct and maintain the necessary infrastructure without us.

There would be a question for the AI as to why it would wipe out humanity, as it may well be far easier to maintain a population of biological servants to do all that work.  But I guess that's still technically ending human civilisation as we know it.    

Unfortunately it is not possible to predict what AI would do.  It has no moral or cultural compunctions and therefore no clear motivations.  We cannot even guess that it has a primal need for survival, given that it would have no notion of either life or death (and either could be disapplied, if learned).  In that sense it could be as threatening as Hannibal Lecter or as harmless as a cardboard box.

Assuming a degree of malevolence though, I have little doubt that AI could send us back to the Iron Age just by pulling the plug on all our own equipment, and render us a slave race by rewarding loyalists with power and luxury.

And here I am writing it online and giving them ideas!

I don’t think AI is conspiring to take over the world. It’s just an extremely dangerous technology that is being developed along unnecessary lines by power-hungry morons. It's tech that does not require the resources or expertise needed for other dangerous tech, e.g. nuclear.

I mean do we really need machines that can visually recognise and kill human targets without needing sign off from human handlers?

I've worked on a fair amount of government stuff before - often as an advisor or linked to my covert carry licence and security clearance. All I can say publicly is that we're a LOT further down this path than people realise. That's the main mistake people make, thinking that AI is something that's potentially coming and is something that we need to start worrying about now. AI is already here, and has been for some time. It's already making decisions that impact each of our lives. The tech is far more advanced than the public think. I know, for example, of at least 6 well-known and well-liked personas on ROF that are actually AI accounts. 

I work in nuclear and it's gaining interest, but for uses that could apply to any discipline, nothing nuclear-specific. I sit on international forums and that's the stage we are at now: just interest.

I don't disbelieve you Shooter, although I also don't think we're as far into Sci-Fi reality as people fear either.

For example, in theory, AI should represent the biggest evolutionary leap for humankind.  So far that hasn't really materialised into anything meaningful that I am aware of.  I would have expected something incredible in medicine or physics by now.  Like: "ChatGPT, please can you explain how we harness nuclear fusion, thanks."

It would be a fairly depressing response that reads: "Actually, the human race has got it covered.  This is your peak.  Well done, because it's downhill from here."

Yes, in my head, "AI" is Marvin. 

Quips, that's because we don't have true artificial general intelligence yet, but it's coming. ChatGPT and its ilk are essentially hyper-advanced predictive text.
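To illustrate the "predictive text" point: here's a deliberately crude sketch of next-word prediction using a bigram frequency table. This is *not* how ChatGPT works internally (that involves neural networks with billions of parameters), but it shows the same basic idea in spirit: given what came before, guess the most likely next word.

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows which in a tiny corpus,
# then predict the most frequent follower. LLMs do next-token prediction
# too, just with vastly more sophisticated statistics.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequently observed follower of `word`.
    return following[word].most_common(1)[0][0]

print(predict_next("on"))   # -> "the": "on" is always followed by "the" here
```

Scale that frequency table up by many orders of magnitude and add the ability to condition on whole passages rather than one word, and you get something that starts to look uncannily like understanding, without necessarily being it.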

Apologies, I was perhaps being a little flippant by referencing ChatGPT, but one might hope that any algorithm which provides problem-solving based on unlimited data and the ability to "learn" from its mistakes would have yielded something by now.

I mean I got a half-decent notice of termination template, but that's not ending world poverty any time soon.

There is a theory that any civilisation comprising intelligent organic life, with all its frailties, must inevitably give way to silicon beings/consciousness, and that organic civilisation is only a brief interlude while the computers and machines that will take over are invented and developed.  The theory says that there are almost certainly civilisations across the universe, but only the very young ones will be made up of flesh and blood (or green slime, or whatever).

Unfortunately it is not possible to predict what AI would do.  It has no moral or cultural compunctions and therefore no clear motivations. 

This is the real issue, I think.  People focus on its intelligence and abilities, but we all rely daily on tech that performs tasks better than we can, and of course it's all just doing what we tell it.  The real question for an AI is not what it can do, but why it would do anything at all, even stay alive, if it wasn't told to.  Human knowledge is an easy one, processing speed is also done, and general 'intelligence' is probably doable.  But human-like motivations (desire, curiosity, fear, pride, shame, satisfaction, etc.) are harder: it's difficult to see how anything like these can ever be achieved except via complex but ultimately mechanical mimicry, which depends on an original set of inputs decided on by humans.