First, thanks for modding this hive, @OneRedFox; second, thanks for posting this.
With how quickly technology moves and the myriad reasons new technology gets pushed and adopted, it’s all too easy to overlook immoral and unethical behavior. This article touches on a lot of things people should be wary of regarding the use (and blind trust) of AI technology.
Though this is only anecdotal, it unfortunately seems like most of the people with the influence to push meaningful discussion of subjects like this into the larger social arenas are too preoccupied with how “shiny” AI technology is.
That being said, I’m going on record now and stating that I’ll be fighting for the resistance against Skynet. 🍁
Yeah, computers are not flawless, and AI is no exception. It’s also subject to whatever biases get introduced into the system, whether directly through the programming or through the datasets used for training. I recall that a few years ago Google’s image recognition technology was mistakenly labeling Black people as gorillas because it hadn’t been tested on enough racial minorities. Computers and AI can be useful tools, but people need to keep that in mind when using them.
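To illustrate the training-data point, here’s a minimal sketch (nothing to do with Google’s actual system, and all the groups, features, and numbers are made up): a classifier trained on data where one group is heavily underrepresented ends up noticeably less accurate on that group.

```python
# Hypothetical illustration of dataset bias: a model trained mostly on
# "group A" samples performs worse on the underrepresented "group B".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-feature samples; the true labeling rule differs slightly per group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

# Training set: group A is heavily overrepresented relative to group B.
Xa, ya = make_group(5000, shift=0.0)   # majority group
Xb, yb = make_group(100,  shift=1.5)   # underrepresented group
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced held-out sets for each group.
Xa_test, ya_test = make_group(2000, shift=0.0)
Xb_test, yb_test = make_group(2000, shift=1.5)
print("accuracy on majority group:        ", model.score(Xa_test, ya_test))
print("accuracy on underrepresented group:", model.score(Xb_test, yb_test))
```

Running it, the model scores well on the majority group and much worse on the underrepresented one, even though nothing in the code is “programmed” to treat the groups differently; the skew in the training data does it on its own.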