BIAS OF AI AND CIVIC VIRTUE IN DIGITAL ENVIRONMENT
Abstract
The article discusses two cases of intervention in AI technology, one human and one algorithmic. The first is the 'Naver case,' in which Naver was accused by the Korean government of manipulating search algorithms. The case raises an issue of computer engineers' professional ethics: whether to 'intervene' in existing algorithms to obtain 'better results.' The second is the case of 'Yiruda' (a Korean chatbot), which produced serious hate speech against socially disadvantaged groups. It raised concerns about abuses of artificial intelligence.
Finally, this paper notes that despite technical efforts in the process of utilizing artificial intelligence, bias cannot be entirely removed. To minimize bias, I argue that active feedback through continuous monitoring of the results produced by artificial intelligence is required, in addition to technical efforts such as refining data training. Furthermore, based on John Rawls's "Overlapping Consensus," I suggest the need to optimize bias rather than merely reduce it.