“Without judging their respective ‘quality’, decision-making processes by humans and by algorithms are fundamentally and categorically different, make different mistakes, and might have different outcomes and therefore consequences. While societies and governments have considerable experience understanding human decision-making and its failures, they are only beginning to understand the flaws, limitations and boundaries of algorithmic decision-making.” (EU report, Algorithms and Human Rights)
The rise of Neural Machine Interpreting/Translation and AI brings the promise of solving the problem of access to translation and interpretation services. Services can be made more uniformly available and at lower cost, addressing disparities in access across different sectors. They could also address disparities in access for languages of lesser diffusion and rare language combinations. This in turn means moving toward more equitable access to services and information beyond language barriers. However, the use of machine interpreting and translation, and its various applications in public sector and community translation and interpreting, raises a number of ethical concerns.
I will discuss the growing use of these technologies and how we are incorporating them into our workflows.
I will also suggest a decision tree for their use and raise questions about the pitfalls of that use:
• Are we perpetuating patterns of discrimination and oppression through our use of technology and AI?
• What challenges must we examine that we have not had to deal with before? Accuracy and fidelity, confidentiality, transparency of data sources, and impartiality are among the most important.
In this presentation, I will suggest some approaches to incorporating the benefits of NMT and AI into the provision of translation and interpretation services in public sector and community settings, while also addressing emerging ethical considerations. I will also present suggested guidelines for making informed decisions about the effective use of these technologies in various settings, and briefly mention safeguards to be put in place to avoid pitfalls and challenges.
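The abstract does not spell out the decision tree itself; purely as an illustration of what such a tree might encode, here is a minimal sketch in Python. All criteria, names, and recommendation strings below are assumptions for illustration, not the author's actual framework; they loosely reflect the concerns named above (accuracy, confidentiality, stakes, and availability of human review).

```python
from dataclasses import dataclass


@dataclass
class Scenario:
    """A hypothetical request for language services (fields are illustrative)."""
    high_stakes: bool              # e.g. legal, medical, or asylum settings
    confidential: bool             # personal data would leave the organisation
    rare_language_pair: bool       # little training data; MT quality unverified
    human_reviewer_available: bool # a qualified bilingual can check the output


def recommend(s: Scenario) -> str:
    """Toy decision tree: walk the checks in order and return a recommendation."""
    if s.high_stakes:
        # High-stakes encounters: accuracy and fidelity risks rule out MT alone.
        return "human professional"
    if s.confidential and not s.human_reviewer_available:
        # Confidential content with no oversight: avoid sending data to MT services.
        return "human professional"
    if s.rare_language_pair:
        # Unverified quality for the language pair: MT only with mandatory review.
        return "machine output with mandatory human review"
    if s.human_reviewer_available:
        return "machine translation with human post-editing"
    return "machine translation with user advisory notice"
```

A scenario such as `Scenario(high_stakes=False, confidential=False, rare_language_pair=True, human_reviewer_available=True)` would fall through to the rare-language-pair branch; the point of the sketch is only that the ethical criteria above can be made explicit and ordered, not that these particular thresholds are correct.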