There is a cognitive bias called the “Dunning-Kruger effect” in which people have such a lack of expertise in a certain domain that they are unaware of their own ignorance. They don’t know what they don’t know. Obviously, this can be very dangerous. For example, if I were unaware that I was a mediocre driver, I might eagerly accept an opportunity to compete in an amateur Formula One race, if such a thing existed.
Interesting. Not sure I agree with your analogy in this case. Wouldn’t AI provide for a plethora of if/then options that allow tweaking of plans as different scenarios develop?
However, it seems plenty of computer-driven algorithms already exist that could leapfrog over human control - or, equally scary, be controlled by one or more humans (talking to you, Elon).
As I understand it, the AI algorithms could get so sophisticated and powerful that the AI could override if/then statements. Some people who work on this full time feel that we're fast approaching the point of no return.
In other words, the AI would be in control of the scenarios, not humans. That's the basis for my attempt at an analogy between the war plans and AI: loss of human agency.
It's a "risk" not a certainty, but I'm glad it's recognized as a risk.
I haven’t had a chance to look at it in any depth, but I think I read recently that the Dunning-Kruger effect isn’t replicating. Poetic justice, in that we need to be humble about the research about the need to be humble?
The idea that this is where humanity is headed scares me as much as anything.
Too many ways that it could go wrong.
The good news I suppose is that the problem is recognized.
I hope so. I worry about the subtle creeping in of technology that integrates with humanity in a way that we don’t recognize the shift.
I know that there will always be a sect that has their antenna up, though. Grateful for that.
Good analogy and good history lesson. The common factor in both scenarios is human nature. "We have met the enemy and he is us".
David, this is a great explanation of how AI could pose an ER: https://voxeu.org/article/ai-and-paperclip-problem