Ethics and Artificial Intelligence for Workplace Writers

2023-05-15 11:00 techwhirl

Artificial Intelligence (AI) currently dominates news and media coverage, including the big questions around the speed of its evolution and the ethics of its use. Joshua Lee, a writer, college instructor, and police detective, graciously agreed to answer some of my questions about the ethics of using ChatGPT and other forms of artificial intelligence for workplace writing. The following is the result of our conversation.

Erika: I heard that you just wrote an article about how police can use AI (such as ChatGPT) for police work. What AI applications do you see for workplace writers both within and outside of police work?

Josh: Writing is an essential function in nearly every profession. If you look at the most influential people in every field, they all have one thing in common: they write. Artists, photographers, landscapers, doctors, lawyers, accountants, scientists, and even police officers have to learn to write well if they are going to influence change.

I recently wrote an article on how police officers and management can use AI chatbots like ChatGPT to help make educated decisions about complex issues facing law enforcement. Low staffing, low police morale, and a lack of community support lead to higher crime, more police use-of-force encounters, and higher attrition. These are not easy issues to solve on our own, and technology can help.

AI chatbots can benefit workplace writers by giving them easier access to research, articles, white papers, and best practices for their field. They can then use that information to write influential content for their industry.

Erika: I also hear that you are working on an article about ethics and the proper use of chatbots. Could you speak to ethics issues and the proper use of chatbots for professional writers, both in police writing and in other workplace writing? For example, is it ethical to use chatbots to write content such as the following: a blog article for a non-profit? A user manual? An email marketing campaign? Social media posts? An in-house newsletter for employees? Content on public websites, such as the Motor Vehicles Department or IRS.gov? Grant proposals?

Josh: I have been involved in the ethical use of AI for nearly ten years. It's clear that the ethical issues for police applications are different from those for professional writers, simply because of constitutional rights and privacy issues. If an officer misuses AI, he could lose the case, a bad guy could go free, an innocent person could suffer harm, or the officer himself could go to jail or be sued.

While constitutional rights and privacy may be beyond a typical writer's purview, professional writers can learn some best practices to keep them safely within the ethical boundaries of advanced technology.

1) Understand what AI chatbots are and do

Chatbots like ChatGPT gather data from open sources at a ridiculously fast rate. They learn from previous questions and answers, which allows them to answer faster. Not only do AI chatbots gather data, but they also analyze questions contextually. That means you as the writer will have to give context to your question to get the best answer possible. You can ask a question like "What is the capital of Italy?" and it will give you a response just like Google, but that is not what AI chatbots are for. They were developed to have a deeper conversation, just like talking to a friend, coworker, parent figure, or professor.
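To make the point about context concrete, here is a minimal sketch of asking the same kind of question with and without context. It is not from the interview: it assumes the OpenAI Python SDK and an API key in the environment, and the model name and prompts are illustrative.

```python
# A minimal sketch of context-free vs. context-rich prompting.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    """Send a chat request and return the assistant's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        messages=messages,
    )
    return response.choices[0].message.content

# A bare, Google-style question: answerable, but shallow.
print(ask([{"role": "user", "content": "What is the capital of Italy?"}]))

# The same tool used the way Josh describes: the writer supplies audience,
# purpose, and constraints so the chatbot can hold a deeper conversation.
print(ask([
    {"role": "user", "content": (
        "I write an in-house newsletter for a 200-person accounting firm. "
        "Suggest three article angles on business travel to Rome for our "
        "staff, with a short rationale for each."
    )},
]))
```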
2) Chatbots are like kids, so don't abuse them

One of the most important things to remember is that chatbots can be manipulated into giving you a response based on the context of the question. This is the same as asking your kids questions but rephrasing your question to elicit the response you want to hear, or when police officers intentionally ask leading questions to elicit a confession. If you intentionally phrase your question so that the chatbot gives you the answer you want to hear rather than the answer you just don't want to hear, and you then publish that response, that is unethical. It also leads to a lot of misinformation.

3) Beware of your biases

Chatbots aren't inherently biased, but all humans have implicit biases that impact chatbots' responses and how they learn. AI users must first understand their own biases before using AI for decision making.

4) Cite sources

When you use an AI chatbot like ChatGPT, you are using a tool to find information that you may or may not have been able to find without that tool. When you get a response from a chatbot, it is important to find the true source of the chatbot's response. Where did that chatbot get that information? I have been using ChatGPT since its release, and I have found that most of the time ChatGPT can provide the source of its information. Some sources are academic; others are not. If the system gives the source, verify it yourself before using it. If the system can't give you a source, don't use that information in any published work.

Erika: Yes! And check those sources to make sure they are reputable and reliable. Here's an example of what can happen if you don't.

As an experiment, I asked ChatGPT to write an article about building pet shelters. Then I asked ChatGPT to tell me the sources it used to create the article. ChatGPT gave me a list of what looked like reputable, relevant sources, such as the American Society for the Prevention of Cruelty to Animals and the Humane Society. But when I checked those sources—actually going to the URLs that ChatGPT provided—the source was "not found." That is, the information was not available at the web address that ChatGPT gave me. In addition, ChatGPT could not tell me which parts of the article it wrote came from which source.

Did ChatGPT hallucinate these sources? Did it make them up? I don't know, so I can't use that information in a public report unless I confirm it through my own search. Giving false information—such as claiming that non-existent resources exist—is something that chatbot technology is known for. You can learn more about these so-called "hallucinations" in the New York Times article "What Makes A.I. Chatbots Go Wrong?" (Cade Metz, March 29, 2023). A chatbot can give us misinformation because the database upon which it bases its answers contains misinformation or outdated information.

Bing was more upfront: when I asked it the same questions, it told me that it didn't use outside sources to create its article about building an animal shelter. The article it created was—according to Bing—generated from its own "internal knowledge and information." When I asked Bing for a list of sources with more information about building animal shelters, it gave me some reputable sites but also one site containing inadequately edited, suspicious content.
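The link check described above is easy to automate. The following is a minimal sketch, not part of the original interview; it assumes the third-party `requests` library, and the URLs are hypothetical placeholders, not the ones ChatGPT actually returned.

```python
# A minimal sketch for verifying that chatbot-supplied source URLs resolve.
# Assumes the `requests` library (pip install requests); the URLs below are
# hypothetical placeholders.
import requests

def check_sources(urls):
    """Print the HTTP status of each cited URL so dead links stand out."""
    for url in urls:
        try:
            # Some servers reject HEAD requests, so fall back to GET.
            response = requests.head(url, allow_redirects=True, timeout=10)
            if response.status_code >= 400:
                response = requests.get(url, allow_redirects=True, timeout=10)
            print(f"{response.status_code}  {url}")
        except requests.RequestException as error:
            print(f"FAILED  {url}  ({error})")

check_sources([
    "https://www.aspca.org/example-cited-page",    # hypothetical citation
    "https://www.humanesociety.org/example-page",  # hypothetical citation
])
```

Keep in mind that a 200 status only proves the page exists; you still have to read the source yourself to confirm it is reputable and actually supports the chatbot's claims.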
The takeaway: as Anna Mills—curator of the resource area AI Text Generators and Teaching Writing for the Writing Across the Curriculum Clearinghouse—says, don't use AI chatbots if you don't have the expertise to evaluate their output.[i] There is a lot of good information on the web, but there is also plenty of false and undocumented information. Chatbots tell us what is in their database regardless of how true or relevant that information is.

We might be used to reading online articles that don't tell us where the "author" got information, but leaving out our sources remains an unethical practice. Students are used to providing sources for school-based research work, but when they transition to the workplace, they are not always sure whether they still need to cite their sources. My advice is that in the workplace they always need to cite their sources, both to avoid legal issues and to maintain their employer's reputation as well as their own reputations as honest writers. Nevertheless, using academic citation methods in the workplace is not always appropriate or necessary, so learn your industry's expected method of using and citing sources.[ii]

Be a part of the solution: my suggestion is to let your readers know which part of your content is your writing and which is AI, just as we let our readers know when another human is writing part of our content. The Chicago Manual of Style agrees and also provides more guidance, such as how to include the prompt you used in ChatGPT and what to include in citations in formal and informal writing.[iii]

When unscrupulous, naïve, or careless writers use AI to write web content, that content can be full of good, not-so-good, and garbage information. References allow readers to check the truth and relevance of content. However, if those references point to sources that are non-existent or no longer available, then I mistrust the whole article. Because of chatbots like ChatGPT, when I read an online article that gives just a list of references at the end instead of using footnotes linking sources to the corresponding information inside the article, I suspect the quality and truth of that article. The rise of chatbots might help writers create content more quickly, but it also makes my job as a reader much harder.

5) Transparency

Josh: Users should inform their readers when they use AI in their writing. There is nothing wrong with using tools in your work; what is wrong is taking full credit and not being transparent about where you got that information. When I use ChatGPT, I include a quick disclaimer at the end. Something like: "This article was written by giving prompts to ChatGPT."

Now, is it ethical to use chatbots to write things like blogs, user manuals, emails, and social media posts? Sure, but only as long as you follow the five best practices detailed above.

Erika: I use Grammarly to catch typos in my writing. Are there ethical issues with that?

Josh: Not at all. Grammarly, ProWritingAid, PerfectIt, and even Microsoft's spelling and grammar checkers are all AI tools. There is nothing wrong with using any of those.

Erika: If I am editing someone else's work, do I need to tell my client that I am using ChatGPT or Grammarly?

Josh: Now this is a good question. As an editor, you use tools to help you get your job done quickly and effectively. Using ChatGPT or Grammarly is no different from how photographers use Photoshop to edit their pictures.
In my contracts, I tell all my clients that I use various tools, including AI technology, to aid me as a writer and editor. Your client is hiring you for a specific job, and tools like AI are just tools, as long as you use them ethically.

Erika: Why is it important for workplace writers to think about the ethics of using chatbots like ChatGPT?

Josh: Good question. Without sounding too nerdy, AI chatbots like ChatGPT or Google's Bard fall under the technical umbrella of machine learning (ML). Chatbots learn from each question you ask them and from how you reply to their responses.

A new AI program or chatbot is not inherently unethical. It is only when AI starts learning from biased or unethical human input that it becomes "corrupted." AI chatbots are just like kids. If you teach a machine unethically or immorally, the machine will develop into something that responds unethically and immorally. If you teach a child to be unethical or immoral, chances are they will develop into an unethical and immoral person. This is very scary when you are talking about a machine that can be significantly faster than a human.

As writers, we have to be careful about what we ask and the context behind the question.

Erika: Is there any other information about ethics that you would like to share with workplace writers as they navigate this technology?

Josh: Take the time to learn who you are as a person. Take implicit bias courses and truly try to understand how and why you make the decisions that you do. The more you understand yourself, the better you will be able to use these types of systems. Also, don't intentionally use the technology for nefarious purposes.

Erika: Thank you for answering my questions, Joshua. Where can we read more of your work?

Josh: I have an active blog on Police1 at https://www.police1.com/columnists/joshua-lee/

[i] Anna Mills, workshop: Writing With and Beyond AI (with Ron Brooks, Northern Arizona University, April 21, 2023).

[ii] To learn more about how to cite sources in a few types of workplace content, here are a few sources. Workplace reports: David Taylor's YouTube video "How to Cite Sources in Business Writing" covers how and why to use direct quotes and paraphrases. Marketing content: https://www.carnegiehighered.com/blog/how-to-harness-artificial-intelligence-marketing/. Infographics: this blog article on the Venngage (a "business visuals" firm) website includes ways to cite a source without using academic style: https://venngage.com/blog/how-to-cite-an-infographic/

[iii] https://www.chicagomanualofstyle.org/qanda/data/faq/topics/Documentation/faq0422.html