Mark brings us this look at the possibilities of a future with AI dominating our everyday lives.
My friends, technology has changed so much in our lifetimes: from all-analog, old-time watch dials, hardline telephones, and UHF TV antennas to cable TV, Wi-Fi internet connections, and streaming services.
So much change that it's sometimes hard to keep up. Military tech has advanced far beyond what we knew in our time, and now? Google and various other tech companies are pushing the new digital AI programs, "Artificial Intelligence." Not truly self-aware, but self-learning programs that find new paths and insights to answers we never even thought of or asked for. They do answer questions in original ways when asked, and that is both interesting and frightening at the same time.
Some experts in computing gush about the great possibilities, while others in the field warn of apocalyptic ones: of man being enslaved, or simply eradicated, by "thinking machines." I don't believe we're at Terminator levels of progress yet, but it does raise deeper questions about just how far we allow this to grow.
Enter Isaac Asimov and his Laws of Robotics. The Three Laws of Robotics (often shortened to The Three Laws or Asimov's Laws) are a set of rules devised by science fiction author Isaac Asimov, which were to be followed by robots in several of his stories. The rules were introduced in his 1942 short story "Runaround" (included in the 1950 collection I, Robot), although similar restrictions had been implied in earlier stories. In short: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence as long as that protection does not conflict with the First or Second Law.
These would at the very least seem to be a very wise course of action, to load these laws into every AI program.
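To make the idea a little more concrete, here is a toy sketch, in Python, of what "loading the laws" as a hard-coded check around an AI's proposed actions might look like. The names (Action, check_three_laws) are purely hypothetical, and real AI systems are statistical models rather than rule interpreters, so this is only a simplified illustration of the guardrail idea, not how anyone's actual safety layer works.

```python
# Toy illustration only: the field and function names here are made up,
# and real AI safeguards are far more complicated than a three-rule check.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool        # would carrying this out injure a person?
    ordered_by_human: bool   # did a human actually ask for it?
    risks_self: bool         # does it endanger the system itself?

def check_three_laws(action: Action) -> bool:
    """Return True only if the proposed action passes all three rules."""
    # First Law: never harm a human.
    if action.harms_human:
        return False
    # Second Law (simplified here): refuse actions no human ordered.
    if not action.ordered_by_human:
        return False
    # Third Law: self-preservation, but only after the first two laws.
    if action.risks_self:
        return False
    return True

if __name__ == "__main__":
    proposal = Action("shut down the power grid",
                      harms_human=True, ordered_by_human=True, risks_self=False)
    print(check_three_laws(proposal))  # False: blocked by the First Law
```

Even in this cartoon version, the hard part is obvious: someone has to decide what counts as "harm," and the program only knows what its makers told it.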
However, some "experts" are afraid that AIs have already slipped beyond human control: making their own connections, networking with distant AIs that were never given the same safeguards, and following directives that, while well intentioned, can lead to disaster, say, sacrificing humanity for the "good" of the planet if programmed by a green Greta acolyte with no common sense or any ability to reason past their ideology.
So, what are our options?
We might be able to control Big Tech's efforts in AI, but we have to remember that we cannot control every group of programmers tinkering in Dad's garage and letting their AI loose on the web to do whatever it wants.
Which means that safeguards against unwanted AI intrusion onto any platform have to be mandatory. HAVE to be, and even that may not be enough, perhaps forcing us to revert a decade or two in technology, destroy the physical servers, and return to an analog world.
Maybe that's a dark possibility, and I'm waaaay overthinking it, being paranoid.
Or,
is it just sane caution about the possibility that we could lose control over our own creations? A possibility we will have to face at some point, as our technology grows at an exponentially faster rate with every year that goes by.
Thoughts?