
The idea that this is where humanity is headed scares me as much as anything.

Too many ways that it could go wrong.

Author

The good news, I suppose, is that the problem is recognized.


I hope so. I worry about technology subtly creeping in and integrating with humanity in such a way that we don’t recognize the shift.

I know that there will always be a sect that has their antenna up, though. Grateful for that.

Apr 28, 2022 · Liked by david roberts

Good analogy and good history lesson. The common factor in both scenarios is human nature. "We have met the enemy and he is us."


Interesting. Not sure I agree with your analogy in this case. Wouldn’t AI provide for a plethora of if/then options that allow tweaking of plans as different scenarios develop?

However, it seems plenty of computer-driven algorithms already exist that could leapfrog over human control - or, equally scary, be controlled by one or more humans (talking to you, Elon).

Author

As I understand it, AI algorithms could become so sophisticated and powerful that the AI could override its if/then statements. Some people who work on this full time feel that we're fast approaching the point of no return.

In other words, the AI would be in control of the scenarios, not humans. That's the basis for my attempt at an analogy between the war plans and AI: loss of human agency.

It's a "risk," not a certainty, but I'm glad it's recognized as a risk.


I haven’t had a chance to look at it in any depth, but I think I read recently that the Dunning-Kruger effect isn’t replicating. Poetic justice, in that we need to be humble about the research on the need to be humble?


David, this is a great explanation of how AI could pose an existential risk: https://voxeu.org/article/ai-and-paperclip-problem
