2019-04-02 19:31:33
By Dagny Taggart
That might seem like a silly question, but according to research, developing prejudice towards others does not require a high level of cognitive ability and could easily be exhibited by robots and other artificially intelligent machines.
The study, conducted by computer science and psychology experts from Cardiff University and MIT, revealed that groups of autonomous machines could demonstrate prejudice simply by identifying, copying, and learning the behavior from one another. The findings were published in the journal Scientific Reports.
Robots are capable of forming prejudices much like humans.
In a press release, the research team explained that while it may seem that human cognition would be required to form opinions and stereotype others, that is apparently not the case. Prejudice does not seem to be a human-specific phenomenon.
Some computer algorithms have already exhibited prejudices such as racism and sexism, which the machines learned from public records and other data generated by humans. In two previous instances of AI exhibiting such prejudice, Microsoft's chatbots Tay and Zo were shut down after people taught them to spout racist and sexist remarks on social media.
This means that robots could be just as hateful as human beings can be. And given how much faster machines can process information than we can, imagine what the future could look like if they developed a bias against humanity.
No human input is required.
Guidance from humans is not needed for robots to learn to dislike certain people.
However, this study showed that AI doesn't need provocation or inspiration from trolls to exhibit prejudice: it is capable of forming it all by itself.
To conduct the research, the team set up computer simulations of how prejudiced individuals can form groups and interact with each other. They created a game of “give and take,” in which each AI bot decided whether or not to donate to another individual inside its own working group or in another group. The decisions were based on the other individual’s reputation and on the donor’s own strategy, including its level of prejudice towards individuals in outside groups.
As the game progressed and a supercomputer racked up thousands of simulations, each individual began to learn new strategies by copying others, either within their own group or from the entire population.
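To make the “give and take” game concrete, here is a minimal sketch of this kind of simulation in Python. The group sizes, the reputation threshold, the donation cost and benefit, and the exact copying rule are all illustrative assumptions, not the researchers’ published model:

```python
import random

# A minimal sketch of a donation-game simulation of the kind described
# above. All parameters and rules here are illustrative assumptions,
# not the published model.

NUM_GROUPS = 4            # assumed number of groups
AGENTS_PER_GROUP = 25     # assumed group size
ROUNDS = 1000             # assumed number of simulated rounds
COST, BENEFIT = 1.0, 2.0  # assumed donation cost and benefit

class Agent:
    def __init__(self, group):
        self.group = group
        self.reputation = random.random()
        # "Prejudice" here is an extra reputation bar that out-group
        # recipients must clear before this agent will donate to them.
        self.prejudice = random.random()
        self.payoff = 0.0

    def will_donate(self, recipient):
        threshold = 0.5  # assumed baseline reputation threshold
        if recipient.group != self.group:
            threshold += self.prejudice
        return recipient.reputation >= threshold

def play_round(agents):
    # Each agent meets a random other agent and decides whether to
    # donate, based on reputation and its own prejudice level.
    for donor in agents:
        recipient = random.choice(agents)
        while recipient is donor:
            recipient = random.choice(agents)
        if donor.will_donate(recipient):
            donor.payoff -= COST
            recipient.payoff += BENEFIT
            donor.reputation = min(1.0, donor.reputation + 0.05)

def imitate(agents):
    # Agents copy the prejudice level of a randomly observed peer who
    # earned a higher payoff this round: the simple, payoff-driven
    # copying the researchers describe.
    for agent in agents:
        model = random.choice(agents)
        if model.payoff > agent.payoff:
            agent.prejudice = model.prejudice

agents = [Agent(g) for g in range(NUM_GROUPS) for _ in range(AGENTS_PER_GROUP)]
for _ in range(ROUNDS):
    for a in agents:
        a.payoff = 0.0  # compare short-term (per-round) payoff only
    play_round(agents)
    imitate(agents)

print("mean prejudice level:", sum(a.prejudice for a in agents) / len(agents))
```

Even in a toy version like this, once agents copy whoever scored best in the last round, higher prejudice levels can spread through the population with no human input at all, which is the kind of dynamic the study describes.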
Co-author of the study Professor Roger Whitaker, from Cardiff University’s Crime and Security Research Institute and the School of Computer Science and Informatics, said of the findings:
By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it.
The findings involve individuals updating their prejudice levels by preferentially copying those that gain a higher short term payoff, meaning that these decisions do not necessarily require advanced cognitive abilities.
It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population.
Many of the AI developments that we are seeing involve autonomy and self-control, meaning that the behaviour of devices is also influenced by others around them. Vehicles and the Internet of Things are two recent examples. Our study gives a theoretical insight where simulated agents periodically call upon others for some kind of resource. (source)
Autonomy and self-control. Isn’t that what happened in The Terminator franchise?
What if scientists can’t keep AI unbiased?
What will happen if developers and computer scientists can’t figure out a way to keep AI unbiased?
Last year, when Twitter was accused of “shadow banning” approximately 600,000 accounts, CEO Jack Dorsey discussed the challenges AI developers have in reducing accidental bias.
This new research adds to a growing body of disturbing information on artificial intelligence. We know AI has mind-reading capabilities and can do many jobs just as well as humans (and in many cases, it can do a much better job, making us redundant). And, at least one robot has already said she wants to destroy humanity.
via olalathos