Welcome to my nightmare: MIT scientists just created a psychopathic A.I.

Dubbed Norman, MIT's latest creation is not your typical artificial intelligence system.

"Norman only observed horrifying image captions, so it sees death in whatever image it looks at", the MIT researchers behind Norman told CNNMoney. Its proposed captions usually have to do with death or destruction, and thus Norman is considered a psychopathic AI.

Without the right datasets providing a stable foundation for AI training, you cannot rely on the decisions an AI makes, nor on its perception of the world.

Norman is not the first AI to be shaped by bad inputs. Microsoft's Tay chatbot, which was created to talk like a teenage girl, quickly turned into "an evil Hitler-loving" and "incestual sex-promoting" robot, prompting Microsoft to pull the plug on the project, says The Daily Telegraph.

Naturally, with this kind of data set as reference material, Norman's image captions are particularly gruesome.

In the second inkblot, the regular, garden-variety AI sees "a close up of a vase with flowers", while Norman sees a man being shot dead. How do they do that?

"Norman is an AI that is trained to perform image captioning; a popular deep learning method of generating a textual description of an image", according to a statement from the MIT Norman team, which included post doctorate student Pinar Yanardag, research manager Manuel Cebrain, and associate professor Iyad Rahwan.

After exposing Norman to this material, researchers fed it pictures of inkblots and asked the AI to interpret them, the broadcaster reports. Imagine Norman as the main character in a series of children's books.

This is how the researchers developed Norman the psychopath AI and put it to the test: they fed the same inkblot images to both Norman and a standard AI and compared the captions each produced to gauge Norman's psychological state. Of course, a caption-generating model has no psyche to diagnose in any clinical sense, so it's worth taking the results of Norman's test with a healthy amount of skepticism.
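
As a rough sketch of that comparison, one could run the same inkblot images through two captioning models and print their outputs side by side. Neither the MIT team's Norman checkpoint nor its standard counterpart is assumed to be publicly available here, so both model names below are placeholders, and the image filenames are hypothetical.

```python
from transformers import pipeline

# Placeholders: a stock captioner stands in for the "standard AI", and the same
# checkpoint is reused for "Norman" since no psychopathically trained model is
# assumed to be available.
standard_ai = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
norman = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

inkblots = ["inkblot_01.png", "inkblot_02.png"]  # hypothetical test images

for path in inkblots:
    standard_caption = standard_ai(path)[0]["generated_text"]
    norman_caption = norman(path)[0]["generated_text"]
    print(path)
    print(f"  standard AI: {standard_caption}")
    print(f"  Norman:      {norman_caption}")
```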

According to Alphr, the study was designed to examine how an AI system's behaviour changes depending on the data used to train it.

Researchers at the Massachusetts Institute of Technology (MIT) taught Norman its dark tendencies by exposing it exclusively to gruesome and violent content: image captions drawn from an infamous Reddit forum dedicated to documenting death.

If you were wondering whether you can use Reddit to train artificial intelligence, then you have your answer.