- This topic has 32 replies, 7 voices, and was last updated 7 years, 11 months ago by
Ed P.
April 16, 2018 at 7:37 pm #19738
I just read this on Tom’s Hardware:
“Another principle for AI code as established by the UK report is that AI should never have the autonomous power to “hurt, destroy or deceive human beings.” This also seems to be a contrary principle to that of the U.S. government, which is looking to build drones that will decide on their own when and who to kill.”
I’m not a programmer but every time I read or hear something like this I can’t help but think “WTF are they talking about”?
Some of you guys know a fair bit of programming. Do you really believe that AI stuff? To me it sounds like complete nonsense. No matter how sophisticated a program is, it’s still a program where people anticipated every eventuality. And if something hasn’t been anticipated, it either crashes, freezes, or does a default action it was programmed to do in such an event. Computers don’t make choices. And a random “choice” is not a choice.
How do you guys see this? Are you believers or non believers?
April 16, 2018 at 8:37 pm #19740
Let’s see if I can explain one of the scariest areas of AI, neural networks, and why they are scary. I’ll probably fail, but I have warned you!
Imagine if you will a number of quite complex wiring blocks, each with the same rough layout, with the inputs going in on A and the outputs on B. Let’s imagine this is a sensor net in which the sensor inputs run from no input up to 100%, with all the possible points in between. Let’s also have the outputs run from undetected up to definitely detected, with all the values in between. Easy so far, so let’s make the relationship non-linear. In a military context this could be a PIR detector for an autonomous gun-firing sentry.
Now we need to decide whether that was a leaf, a rabbit or a person, so feed the output from the first layer into a second layer that detects the size of the object depending on how many PIRs were simultaneously triggered, and throw out (set to zero) all the little objects. Still easy and still deterministic.
Let’s throw in a third vector: take that output and check which direction the movement is going. Still easy and deterministic, so throw in the fog of war (some real fog). A PIR does not work very well in fog, so results suddenly become vague and random. We do not want the sentry firing off its guns at random, so we need to feed each of these results into a hidden layer that mashes them all together in some way that gives the results that best match practical experience. This puts a weighting on each of the various inputs and combines them to give an output; the weighting is found by training the neural net against many thousands of different atmospheric conditions and taking the results that best fit our preconceptions of the most likely outcome. This was a simple example; in a real-world case there might be a fairly large number of hidden layers, each with different weighting factors.
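In code, the layered weighting above might look roughly like this toy Python sketch. Every number in it (the sensor readings, the weights, the biases) is invented purely for illustration; in a real net they would come out of a training run, which is exactly why nobody can point at them and say what they mean:

```python
import math

def sigmoid(x):
    # Non-linear squashing: maps any weighted sum to a 0..1 confidence
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, pushed through the non-linearity
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

# Three PIR-style inputs, each 0..1 (detection strength, object size, movement)
pir_inputs = [0.9, 0.7, 0.1]

# A hidden layer combines them with "trained" weightings (numbers invented here)
hidden = layer(pir_inputs, [2.0, 1.5, 0.5], -1.0)

# The output layer turns the hidden value into a fire/don't-fire confidence
fire_confidence = layer([hidden], [3.0], -2.0)
print(fire_confidence)  # a number between 0 and 1, not a reason you can interrogate
```

The point is that once several such layers are stacked and the weights come from training data rather than a designer, asking ‘why did it fire?’ only gets you these numbers back, not an explanation.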
If you now try to figure out just why the ‘sentry’ fired its gun and killed your friend Joe, you will get a ‘probably because of this and that’, but the reality is a ‘beats the crap out of me, I can only give you a best guess as an answer’.
It’s a bit like showing someone a shade of grey and asking if it is black or white. The question is inappropriate, as the answer will be shaded by what you mean by black or white, and by all the external factors such as lighting.
Currently neural networks cannot give an understandable answer for why they come up with a particular action. You have to trust that the programmer did not slide in a few biases (for example, setting that everything above the size of a small dog may be a viable target, i.e. small children are viable). You can also throw in biases during the classification of the many sets of training data.
In other words you have to have complete trust in a programmer who, if honest, will still give a ‘beats the crap out of me’ response when asked ‘Why?’
I’ve horribly simplified my answer; a better response might be had by looking at this (ignore the programming and concentrate on the pics and headings to possibly get a better overview).
April 16, 2018 at 8:55 pm #19743
I’m waiting for the day when an AI car like a Tesla decides the life of a single middle-aged man is less worthy than the family in the other car, so it takes the ‘best’ option and saves the family versus the single middle-aged guy.
Who is responsible? The OEM? Maybe, but what if one’s a Ford and the other an Opel? Or is it the actual programmer? Or is it the AI, as it may be constantly “making decisions” on probability? And what happens when you throw in self-learning?
I don’t wish for a death, but this day will come, and the court case following will have massive ramifications whichever way it’s decided.
Google’s cloud has been self-learning image recognition for years now, and it’s now very good indeed.
I’ve said for a long time that the Terminator film was actually a documentary, sent back to warn us. Nah, if we all live in a sim, we’ll be OK. Actually, our sim may be about to run its course anyhow.
Let’s hope the other sims on the servers next door are doing better than ours.
April 16, 2018 at 10:51 pm #19744
Google’s cloud has been self-learning image recognition for years now, and it’s now very good indeed.
Duke, I really doubt Google’s cloud is self-learning image recognition. Surely it’s just some code, scripts, algorithms and whatnot, with all the user data thrown into the mix, running on those servers and adding its own resulting data to the mix the way it was programmed to do?
Just like in Ed’s post, it’s all very complex-sounding, but at the end of the day all that complexity is man-made, and whatever data is inputted into the computer, it is still just a very big number of electrical impulses created by the input from sensors, which then run through the circuits in a predetermined way. If you really think about it, the words ‘intelligence’ or ‘autonomy’ should not be used for computers. And in Ed’s examples the answer “who knows” is only because the machine wasn’t made to keep a backup copy of every input it received from the sensors. With a backup of all the data it senses there would be no mystery.
April 17, 2018 at 12:17 am #19745
I watched a programme about (Tesla, I think) giving thousands of people simulated choices, e.g. hit the school kids or the granny, and forming statistics on human behaviour to give the AI a ‘moral’ code.
Accidents will happen, people will die – probably far fewer than with humans driving – but the value of life still has to be specified for a machine.
April 17, 2018 at 7:14 am #19747
It’s self-learning in the way that the more photos that go into the system, the greater the sample set it has, so the better its guesses are.
The same as we self-learn by reading. What we don’t have yet is self-thinking. I do think Google ‘AI’ does learn from wrong predictions, but just like humans it needs to be told its choice was wrong.
A lot of the captcha cards are “pick all the photos with hills in”, for example. And Google has its own set-up to quicken this, called IIRC Google Rewards; it wants people’s help in identifying similar pictures, like bears vs dogs from different angles, and it makes games out of drawing, i.e. seeing how many people draw the same objects. A bit like global Pictionary.
Also “the AI should not kill …” isn’t from Tom’s Hardware; it is one of Asimov’s laws of robotics, from the early 1940s. I think they borrowed it.
Speedy, the most dangerous time with driving will be when we have AI and humans driving together. I love driving, but we are the weak link. An AI can’t calculate for our stupid human decisions. In theory, if all cars can talk to each other and know what each other is doing, we could all drive nose to tail at 200mph up the M1 in perfect safety. Though motorways are the simple bit.
There are more and more smarts going into cars, and I can see it all going towards giving us some choice, in the medium term, in whether we want autonomous or not. However, year on year, control will be taken off us bit by bit, to the point where the autonomous cars will get to override the non-autonomous ones. I.e. you’ll go to make a lane change and your car will step in and say “not yet”. It’s only one step away from the blind-spot detection and lane-keep assist we already have.
April 17, 2018 at 7:56 am #19748
“… and then it runs through the circuits in a predetermined way”
Sorry, your comment is incorrect. Even knowing exactly what the inputs are only helps ‘understand’ that specific case, as each of the hidden layers changes its weighting factor depending on circumstances. Yes, it is probably deterministic up to a point, but it can still flip to a totally different answer as a result of just minute changes. As there are a huge number of interacting factors, it is practically impossible to say exactly why the program chooses a given output. In addition, when you feed in more training data the whole thing can flip in fairly dramatic ways (a bit like feeding in the uniform and insignia of the enemy, when all of a sudden it will stop shooting Joe!).
Possibly the unpredictable pattern flipping you get from chaos theory is a better explanation, or predicting the impact on the path of a hurricane that results from the mythical flapping of a butterfly’s wing.
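A trivial illustration of that flipping, with an invented decision boundary: two inputs that differ by a sliver land on opposite sides of the threshold and the whole output changes.

```python
def sentry_decision(confidence, threshold=0.5):
    # Toy decision boundary: a minute change in input flips the whole output
    return "fire" if confidence >= threshold else "hold"

# Inputs differing by only 0.0002 produce opposite decisions
print(sentry_decision(0.4999))  # hold
print(sentry_decision(0.5001))  # fire
```

A real net has many such boundaries tangled together across its layers, which is why ‘why did the answer change?’ has no neat reply.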
April 17, 2018 at 8:02 am #19750
I really doubt Google’s cloud is self-learning image recognition
Don’t doubt. I already have access to this technology for CCTV, and it doesn’t cost the earth. You need a Hikvision 4-line camera and a DeepInMind NVR for deep-learning analysis. The processing power of the NVR would normally be capable of servicing 32 cameras but can only handle 4 for this type of face recognition.
With 4TB the NVR is £5k; a typical camera is £300 (inc. VAT). Rather than me rambling on, watch this live demo video from my supplier https://www.youtube.com/watch?v=8upp8w_voPw
Amazing. For those who didn’t bother: you can add a face to a database and trigger alerts when they are detected. I’ve done my hands-on training on this now, and one thing you could do is detect a known shoplifter, swing a PTZ around and have it follow them whilst alerting the mobile phone of the store security.
Another thing is that the search can access multiple NVRs, say on a large retail site. Lost child? You can bet the mother has a photo on her somewhere. Scan that in, search, and all the hits will appear on a map with time stamps.
If I can do this imagine what Google is capable of.
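The search side of that is conceptually simple, even if the recognition itself isn’t. A rough Python sketch of the matching step (the embeddings, camera names, timestamps and threshold are all made up for illustration, nothing to do with how Hikvision actually does it):

```python
import math

def distance(a, b):
    # Euclidean distance between two face 'embeddings' (feature vectors)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def search(query, database, threshold=0.6):
    # Every (camera, timestamp) whose stored embedding is close enough is a hit
    return [(cam, ts) for cam, ts, emb in database
            if distance(query, emb) < threshold]

# Invented data: (camera id, timestamp, embedding extracted from a detected face)
db = [
    ("cam1", "09:14", [0.10, 0.90, 0.30]),
    ("cam2", "09:20", [0.80, 0.10, 0.50]),
    ("cam3", "09:31", [0.12, 0.88, 0.33]),
]

# Query with the scanned photo's embedding; hits map to locations and times
hits = search([0.11, 0.90, 0.31], db)
print(hits)  # [('cam1', '09:14'), ('cam3', '09:31')]
```

All the hard (and unexplainable) work is in producing the embeddings; once you have them, finding the child across the site is just nearest-neighbour search.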
April 17, 2018 at 8:09 am #19751
Dave’s face recognition is a good example of why we can only know in general terms why a given output results. E.g. asking what part of an image made it yield the actual lost child as an answer is a valid question, but you will not get an exact answer – just a best guess.
April 17, 2018 at 8:13 am #19752
Isn’t that what humans do every day, Ed? Best guess? Our brain is much like a CPU: when we go to overtake another car, we quickly, without knowing it, do some mighty complex equations on closing speeds, distances, time etc., and then we use our gut/experience to decide whether we should go.
We as a whole best-guess all day long, based on what others have taught us and our own experience.
April 17, 2018 at 10:01 am #19754
AI also does a whole lot of regression analysis on Big Data, so sometimes it comes up with better answers that humans miss. Putting aside life-and-death decisions, even in the mundane world of business AI is a bit of a problem: not only is it extremely difficult to audit, it may even be so much better than a human that ethics and other questions come up. This is especially true when AI is used to buy/sell on the margin in microsecond programmed trading. All sorts of questions can be raised, as this article proves.
But the really big fear is that AI will allow Katy Perry and the ‘Illuminati‘ of the world to control even more of the world’s wealth, because they will not need human minions to do things, not even bodyguards! (see wealth distribution)
April 17, 2018 at 10:12 am #19755
Trading has been an issue for years; speed is kind. Billions are spent just on high-speed links. AI, or rather computer analytics, is perfect for stock markets.
I can see a near future where a PC runs a central bank or two. Once the second jumps on the wagon, they will all jump on board.
April 17, 2018 at 10:25 am #19757
Wish they’d put a bit more AI in your spell checker, Steve, and not just a dictionary. I guess you hit an ‘F’ and the AI thought that ‘kind’ was a much better match than ‘king’?
April 17, 2018 at 11:04 am #19758
Just to throw another spanner at the dystopian future: Quartz muses that AI could be used to judge morality. Just imagine coupling that up with a mechanical RoboCop: ‘Excuse me sir, it looks like you have offended MY moral code; your penalty is immediate termination!’
April 17, 2018 at 11:45 am #19759
Sorry, typing quickly on a new phone. Also super busy, now waiting on kids. Off to a family funeral. Yippee!
April 17, 2018 at 4:46 pm #19766
What we don’t have yet is self thinking.
Self-thinking is what I think of when I hear “AI”. I think we should be safe from the AI as long as it’s not self-thinking.
April 17, 2018 at 4:52 pm #19767
By coincidence, news of a new quality-of-life AI decision-maker came out last week. It detects a disease of the eye and can be used for diagnosis by unqualified people – though of course the medical mafia will not abdicate their control. However, this one produces verifiable results, so while the software is designed to be autonomous, it can at least be audited to see if it produces the correct answers.
“… This means that the technology can be used by a nurse or doctor who’s not an eye specialist, making diagnosis more accessible. For example, patients wouldn’t need to wait for an eye specialist to be available to get a diagnosis. ”
However, it is yet another job at risk!
April 17, 2018 at 5:00 pm #19768
I think Steve was implying ‘self-directing’, in that currently AI only acts within set parameters. If AI systems ever link up and become self-directing then they could be truly God-like, not something I think humanity could tolerate, though Musk seems to think we are on a slippery path to that fate. Vanity Fair Article
April 17, 2018 at 6:46 pm #19769
I fear once it becomes self-thinking, it will be too late to turn it off.
April 18, 2018 at 2:35 am #19788
There is a school of thought that says all learning is based on desire, so as long as we are still in charge of setting the desired result, we should be safe to let AI figure out the best way to get there.
