I think we’re talking about two different AI use cases here. Maybe the same technology, or at least the same foundation, but two very different sets of goals and outcomes. The first is the Butler Problem AI: the kind designed to sell me something I want (even if I didn’t know I wanted it) or nudge my behavior toward some preferable goal (preferable to whom being the question). The second is automation AI: for example, eliminating dockworkers because the computer can do the job better, faster, and cleaner.
Both of these use cases can be concerning, but for different reasons and with different effects on society and on individuals. It seems we need a more refined descriptor than “AI” when deciding what regulation, or lack of speed bumps, each kind of system needs to thrive without causing massive upheaval.