{"id":622804,"date":"2023-03-28T09:48:41","date_gmt":"2023-03-28T14:48:41","guid":{"rendered":"https:\/\/news.sellorbuyhomefast.com\/index.php\/2023\/03\/28\/an-early-guide-to-policymaking-on-generative-ai\/"},"modified":"2023-03-28T09:48:41","modified_gmt":"2023-03-28T14:48:41","slug":"an-early-guide-to-policymaking-on-generative-ai","status":"publish","type":"post","link":"https:\/\/newsycanuse.com\/index.php\/2023\/03\/28\/an-early-guide-to-policymaking-on-generative-ai\/","title":{"rendered":"An early guide to policymaking on generative AI"},"content":{"rendered":"<div>\n<p><em>This article is from The Technocrat, MIT Technology Review&#8217;s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, <\/em><a href=\"https:\/\/forms.technologyreview.com\/newsletters\/power-the-technocrat\/\"><em>sign up here<\/em><\/a><em>.<\/em><\/p>\n<p>Earlier this week, I was chatting with a policy professor in Washington, DC, who told me that students and colleagues alike are asking about GPT-4 and generative AI: What should they be reading? How much attention should they be paying? <\/p>\n<\/p><\/div>\n<div>\n<p>She wanted to know if I had any suggestions, and asked what I thought all the new advances meant for lawmakers. I\u2019ve spent a few days thinking, reading, and chatting with the experts about this, and my answer morphed into this newsletter. 
So here goes!<\/p>\n<p>Though GPT-4 is the standard bearer, it\u2019s just one of many high-profile generative AI releases in the past few months: <a href=\"https:\/\/www.technologyreview.com\/2023\/03\/21\/1070111\/google-bard-chatgpt-openai-microsoft-bing-search\/?utm_source=the_technocrat&#038;utm_medium=email&#038;utm_campaign=the_technocrat.unpaid.engagement&#038;utm_content=*%7Cdate:m-d-y%7C*\">Google,<\/a> <a href=\"https:\/\/www.wsj.com\/articles\/nvidia-is-winning-ai-race-but-cant-afford-to-trip-10d9e75b\">Nvidia<\/a>, <a href=\"https:\/\/www.theverge.com\/2023\/3\/21\/23648315\/adobe-firefly-ai-image-generator-announced\">Adobe<\/a>, and <a href=\"https:\/\/www.technologyreview.com\/2023\/03\/16\/1069919\/baidu-ernie-bot-chatgpt-launch\/?utm_s[%E2%80%A6]hnocrat.unpaid.engagement&#038;utm_content=*%7Cdate:m-d-y%7C*\">Baidu<\/a> have all announced their own projects. In short, generative AI is the thing that everyone is talking about. And though the tech is not new, its policy implications are months if not years from being understood.\u00a0<\/p>\n<p>GPT-4, released by OpenAI last week, is a multimodal large language model that uses deep learning to predict words in a sentence. It generates remarkably fluent text, and it can respond to images as well as word-based prompts. For paying customers, GPT-4 will now power ChatGPT, which has already been incorporated into commercial applications.\u00a0<\/p>\n<p>The newest iteration has made a major splash, and Bill Gates called it \u201crevolutionary\u201d in a letter this week. 
However, OpenAI has also been criticized for a <a href=\"https:\/\/www.technologyreview.com\/2023\/03\/14\/1069823\/gpt-4-is-bigger-and-better-chatgpt-openai\/?utm_source=the_technocrat&#038;utm_medium=email&#038;utm_campaign=the_technocrat.unpaid.engagement&#038;utm_content=*%7Cdate:m-d-y%7C*\">lack of transparency about how the model <\/a>was trained and evaluated for bias.\u00a0<\/p>\n<p>Despite all the excitement, generative AI comes with significant risks. The models are trained on the toxic repository that is the internet, which means they often produce racist and sexist output. They also regularly make things up and state them with convincing confidence. That could be a nightmare from a misinformation standpoint and could make scams more persuasive and prolific.\u00a0<\/p>\n<p>Generative AI tools are also potential threats to people\u2019s security and privacy, and they have little regard for copyright laws. Companies using generative AI that has <a href=\"https:\/\/techcrunch.com\/2023\/01\/27\/the-current-legal-cases-against-generative-ai-are-just-the-beginning\/\">stolen the work of others<\/a> are already being sued.<\/p>\n<p>Alex Engler, a fellow in governance studies at the Brookings Institution, has considered <a href=\"https:\/\/www.brookings.edu\/blog\/techtank\/2023\/02\/21\/early-thoughts-on-regulating-generative-ai-like-chatgpt\/amp\/\">how policymakers should be thinking about th<\/a>is and sees two main types of risks: harms from malicious use and harms from commercial use. 
Malicious uses of the technology, like disinformation, automated hate speech, and scamming, \u201chave a lot in common with content moderation,\u201d Engler said in an email to me, \u201cand the best way to tackle these risks is likely platform governance.\u201d (If you want to learn more about this, I\u2019d recommend listening to this week\u2019s <a href=\"https:\/\/techpolicy.press\/generative-ai-section-230-and-liability-assessing-the-questions\/\">Sunday Show from Tech Policy Press<\/a>, where Justin Hendrix, an editor and a lecturer on tech, media, and democracy, talks with a panel of experts about whether generative AI systems should be regulated similarly to search and recommendation algorithms. Hint: Section 230.)\u00a0\u00a0<\/p>\n<p>Policy discussions about generative AI have so far focused on that second category: risks from commercial use of the technology, like coding or advertising. So far, the US government has taken small but notable actions, primarily through the Federal Trade Commission (FTC). The FTC issued a warning statement to companies last month urging them not to make claims about technical capabilities that they can\u2019t substantiate, such as overstating what AI can do. This week, on its business blog, it used even stronger language about risks companies should consider when using generative AI.\u00a0\u00a0<\/p>\n<\/p><\/div>\n<div>\n<p>\u201cIf you develop or offer a synthetic media or generative AI product, consider at the design stage and thereafter the reasonably foreseeable\u2014and often obvious\u2014ways it could be misused for fraud or cause other harm. 
Then ask yourself whether such risks are high enough that you shouldn\u2019t offer the product at all,\u201d the blog post reads.\u00a0<\/p>\n<p>The US Copyright Office also <a href=\"https:\/\/www.copyright.gov\/newsnet\/2023\/1004.html\">launched a new initiative<\/a> intended to deal with the thorny policy questions around AI, attribution, and intellectual property.\u00a0<\/p>\n<p>The EU, meanwhile, is staying true to its reputation as the world leader in tech policy. At the start of this year, my colleague Melissa Heikkil\u00e4 <a href=\"https:\/\/www.technologyreview.com\/2023\/01\/10\/1066538\/the-eu-wants-to-regulate-your-favorite-ai-tools\/\">wrote about the EU\u2019s efforts to try to pass<\/a> the AI Act. It\u2019s a set of rules that would prevent companies from releasing models into the wild without disclosing their inner workings, which is precisely what some critics are accusing OpenAI of with the GPT-4 release.\u00a0<\/p>\n<p>The EU <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/policies\/regulatory-framework-ai\">intends to separate high-risk uses<\/a> of AI, like hiring, legal, or financial applications, from lower-risk uses like video games and spam filters, and require more transparency around the more sensitive uses. OpenAI has acknowledged some of the concerns about the speed of adoption. In fact, its own CEO, Sam Altman, told <a href=\"https:\/\/abcnews.go.com\/Technology\/openai-ceo-sam-altman-ai-reshape-society-acknowledges\/story?id=97897122\">ABC News<\/a> he shares many of the same fears. However, the company is still not disclosing key data about GPT-4.\u00a0<\/p>\n<p>For policy folks in Washington, Brussels, London, and offices everywhere else in the world, it\u2019s important to understand that generative AI is here to stay. 
Yes, there\u2019s significant hype, but the recent advances in AI are as real and important as the risks that they pose.\u00a0<\/p>\n<h3><strong>What I am reading this week<\/strong><\/h3>\n<p>Yesterday, the United States Congress called Shou Zi Chew, the CEO of TikTok, to a hearing about privacy and security concerns raised by the popular social media app. His appearance came after the Biden administration threatened a national ban if its parent company, ByteDance, didn\u2019t sell off the majority of its shares.\u00a0<\/p>\n<p>There were lots of headlines, most using a temporal pun, and the hearing laid bare the depths of the new technological cold war between the US and China. For many watching, the hearing was both important and disappointing, with some legislators displaying poor technical understanding and hypocrisy about how Chinese companies handle privacy when American companies collect and trade data in much the same ways.\u00a0<\/p>\n<p>It also revealed how deeply American lawmakers distrust Chinese tech. Here are some of the spicier takes and helpful articles to get up to speed:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.theguardian.com\/technology\/2023\/mar\/23\/key-takeaways-tiktok-hearing-congress-shou-zi-chew\">Key takeaways from TikTok hearing in Congress \u2013 and the uncertain road ahead<\/a> &#8211; Kari Paul and Johana Bhuiyan, The Guardian\u00a0<\/li>\n<li><a href=\"https:\/\/time.com\/6265651\/tiktok-security-us\/\">What to Know About the TikTok Security Concerns<\/a> &#8211; Billy Perrigo, Time<\/li>\n<li><a href=\"https:\/\/www.washingtonpost.com\/technology\/2023\/03\/24\/tiktok-online-privacy-laws\/\">America\u2019s online privacy problems are much bigger than TikTok<\/a> &#8211; Will Oremus, Washington Post<\/li>\n<li><a href=\"https:\/\/www.nytimes.com\/2023\/03\/24\/opinion\/tiktok-ban-first-amendment.html\">There\u2019s a Problem With Banning TikTok. 
It\u2019s Called the First Amendment<\/a> &#8211; Jameel Jaffer (Executive Director of the Knight First Amendment Institute), NYT Opinion<\/li>\n<\/ul>\n<h3><strong>What I learned this week<\/strong><\/h3>\n<p><a href=\"https:\/\/hai.stanford.edu\/news\/ais-powers-political-persuasion\">AI is able to persuade people<\/a> to change their minds about hot-button political issues like an assault weapon ban and paid parental leave, according to a study by a team at Stanford\u2019s Polarization and Social Change Lab. The researchers compared people\u2019s political opinions on a topic before and after reading an AI-generated argument, and found that these arguments can be as effective as human-written ones in persuading the readers: \u201cAI ranked consistently as more factual and logical, less angry, and less reliant upon storytelling as a persuasive technique.\u201d\u00a0<\/p>\n<p>The team points to concerns about the use of generative AI in a political context, such as in lobbying or online discourse. 
(For more on the use of generative AI in politics, please <a href=\"https:\/\/www.technologyreview.com\/2023\/03\/14\/1069717\/how-ai-could-write-our-laws\/?utm_source=the_technocrat&#038;utm_medium=email&#038;utm_campaign=the_technocrat.unpaid.engagement&#038;utm_content=*%7Cdate:m-d-y%7C*\">read this recent piece<\/a> by Nathan Sanders and Bruce Schneier.)<\/p>\n<\/div>\n<p><a href=\"https:\/\/www.technologyreview.com\/2023\/03\/27\/1070285\/early-guide-policymaking-generative-ai-gpt4\/\" class=\"button purchase\" rel=\"nofollow noopener\" target=\"_blank\">Read More<\/a><br \/>\n Tate Ryan-Mosley<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This article is from The Technocrat, MIT Technology Review&#8217;s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here. 
Earlier this week, I was chatting with a policy professor in Washington, DC, who told me that students and colleagues alike are asking about GPT-4<\/p>\n","protected":false},"author":1,"featured_media":622805,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4219,120819,46],"tags":[],"class_list":{"0":"post-622804","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-guide","8":"category-policymaking","9":"category-technology"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/622804","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/comments?post=622804"}],"version-history":[{"count":0,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/622804\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media\/622805"}],"wp:attachment":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media?parent=622804"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/categories?post=622804"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/tags?post=622804"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}