There is growing interest in the market potential of artificial intelligence (AI) technologies and applications, as well as in the risks these technologies might pose. As a result, questions are being raised about the legal and regulatory governance of AI. Artificial intelligence carries real risks and presents complex policy challenges along several dimensions, from jobs and the economy to safety and regulation. The approach that policymakers ultimately choose to govern the wide range of AI technologies and applications will have a dramatic effect on the array of opportunities and benefits that result. Policymakers and regulators face two competing approaches: they can preemptively limit or even ban certain applications out of fear of worst-case scenarios, an approach known as the “precautionary principle,” or they can prioritize experimentation and collaboration and address problems as they arise, an approach we call “permissionless innovation.”

Many of the comments submitted to the Office of Science and Technology Policy (OSTP) in that proceeding called for policy interventions of a precautionary nature. Surprisingly, the anxieties that have traditionally followed advances in automation technologies, namely adverse labor-market effects, were only a secondary concern in many of the most critical public comments. Instead, the specters of discrimination and structural social inequality catalyzed the most restrictive policy recommendations.