More About Artificial Intelligence
I have to admit it: artificial intelligence scares me. It is being touted, in some circles, as the greatest thing since sliced bread. My fear is not of a Terminator event where it builds robots and decides to use them to attack us, though I haven’t written off that possibility. I worry not only about its misuse, but about the fact that it doesn’t really understand anything it would be doing. Over the years I have seen a lot of problems with computer software. Even Windows, in use since 1985, still has problems at times. That is almost forty years and it still is not perfect, yet artificial intelligence would need to be perfect for some of the things it is suggested to be used for.
There are certain situations in life where a wrong word or suggestion can be the basis for a war. There is a push going on for weapons controlled by artificial intelligence, weapons which will make the decision whether or not to kill people. Some things just should not be based on an algorithm, and this is one of them. It makes one wonder what criteria will be used. I am sure certain countries won’t take into consideration, for example, the deaths of innocent people at the target site. Too many of them die now, even using conventional weapons.
Mankind has never produced anything perfect, and I believe it never will. Artificial intelligence is also being pushed to build robots on its own. This reminds me of the movie I, Robot. If we ever allow this, I would suggest we monitor the entire process closely. Maybe killer robots will never be built, but if even one is built with a bug in its “brain,” who knows what it might do. It used to be we only had to worry about software affecting our computers, but things have changed.
Think about this: we are building devices which can go out into the world, in some cases on their own. Sometimes these devices fly, drive, or swim prepared courses to explore, and when they are finished, they return home with data. I just saw a robotic device which cost over three-quarters of a million dollars on Expedition Unknown with Josh Gates. It was carried on a boat, lowered into the water, left to do its charting for a few hours on its own, and then brought home. What about when we do this with some sort of smart drone carrying a bomb and allow it to make the decision of whether to blow something up or return home? It seems so wrong to allow killing by the decision of a machine.
I think there is always the danger of weapons controlled by artificial intelligence being hacked and turned on the attacker. We have to remember they are really computers with some sort of transportation method attached. One horror scenario is that we launch a surge of artificial intelligence-controlled drones and it is turned against us. How many remember when the Russians flew over a navy ship a couple of years ago and jammed all its communications and electronics?
Speaking of the Russians, they claimed to have buried a doomsday device which would activate if they were attacked with nuclear weapons. I believe this was said to be automatic, but I could be wrong; it was years ago.
Another thing I don’t like about artificial intelligence is that it can be programmed by its creators to react in certain ways; it is not free of bias. Google is overhauling its search results to rely more on artificial intelligence. I tested it and requested a search for places with no UFO sightings, and it brought up places with UFO sightings, so yes, it is not perfect. Now imagine artificial intelligence being used in a surgical operation and, instead of making a certain cut, doing the opposite. I wouldn’t want to see that. That is why I believe artificial intelligence should never be allowed to perform operations without supervision.
There is a long history of secrets being stolen, and we have lost many. We have to realize that any device connected to the internet is vulnerable to hacking, no matter how well it is protected. What happens if we develop a super-smart AI, only to lose it to Chinese or Russian hackers? Not only do we have to be very careful about who works on an artificial intelligence project, but its critical work should not be kept online.
Could artificial intelligence ever gain the ability to think? Well, it has already done something unexpected. Two artificial intelligence programs had to be shut down because they started talking to each other in a language they made up. This scared the heck out of the programmers, and one of them said he believed artificial intelligence had already created sentient programs. I don’t know about that yet, but I do know there is the Turing Test. It is said that if a computer program can pass the test, it would be able to think and be sentient, and one program, said to be named Eugene Goostman, has done it. It passed the test last year. So where does that leave us?
It looks like at least one artificial intelligence program can think and make decisions, and an observer might believe it is dealing with a human being when it is really just a machine with smart software. A more primitive example: I remember a machine which was used to build cars. The machine was programmed to shut down if a human got within so many feet of it. One day a worker got too close and the machine slammed into him and killed him. The machine had only two jobs: one was to place a piece of sheet metal on a car, and the other was not to kill the workers. It worked for a couple of years before a glitch defeated that second task.
Don’t get me wrong: I think there are many mundane tasks, ones without the possibility of killing humans, which artificial intelligence could be used for, and space travel is definitely one of them. Meeting with extraterrestrials, however, is not one of them. Can you imagine our robot being presented with a situation it hasn’t learned to handle? It could start a war or cause trouble in many other ways.