Everything You Need to Know About Machine Translation for Sign Language

2020-06-15 22:10 RWS Moravia Insights

Most of us have not heard of machine translation for sign language, but it is an exciting technological development that helps people who are deaf or hard of hearing communicate with the hearing world, most of which does not know sign language. Machine translation systems for sign language do this by automatically converting signs into text or spoken dialogue, and spoken or written responses back into sign language, without the need for a human interpreter.

Machine translation for sign language has been around since the 1970s, but developers have found it difficult to perfect the technology because sign language constructs differ greatly from those of spoken languages. Here are a few machine translation inventions from the past and some more recent developments on the market.

In 1988, James Kramer and Larry Leifer, researchers at Stanford University, invented the first “talking glove” to improve communication between deaf and hearing individuals. The glove translates sign language into text or speech while the person signs.

Then, in 2001, a high school student named Ryan Patterson created a next-generation sign language glove with sensors on each finger. As the person wearing the glove signed, the movements were translated into text on a screen. It wasn’t perfect: the glove could only translate individual letters from the American Manual Alphabet, but it still received plenty of attention and praise. Similar gloves were invented all over the world, but none of them could translate accurately enough to be sold publicly.

Most recently, in 2016, two undergraduate students, Thomas Pryor and Navid Azodi, invented a glove that translates signs into text or speech, transmits the results over Bluetooth and plays them through a speaker. The gloves, called SignAloud Gloves, received national attention.

Despite these advances in the technology, the deaf community and linguists alike have not responded well to the sign language glove inventions to date. Often, inventors don’t consult the deaf community about its needs; rather, products have been based on what the hearing world prefers. The deaf individual is expected to use sign language gloves to make it easier for hearing people to understand them, but these tools don’t improve communication in the opposite direction, as the gloves don’t translate what the hearing person is saying into sign language.

In the future, developers, designers and engineers need to collaborate with the deaf community to understand its needs and desires when it comes to machine translation. After all, these technologies should help deaf users as much as they help hearing speakers.

Luckily, there have been some exciting new developments in machine translation for sign language. A few innovative companies are creating products that benefit hearing speakers and deaf signers alike. Here are two companies whose notable products are being recognised worldwide.

SignAll 1.0 is the first product in the world to allow real-time communication between deaf signers and hearing speakers through automated American Sign Language (ASL) translation technology. The deaf and hearing individuals each communicate in their own language through an on-screen chat dialogue that combines ASL and spoken language. How does it work? The system has two monitors, one for the deaf user and the other for the hearing speaker. The deaf individual wears a pair of gloves and signs in front of cameras; their signs are then translated into text that the hearing individual can read. The hearing person’s spoken response is transcribed into text by an automatic speech recognition system, which the deaf individual can read.
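To make that two-way flow easier to picture, here is a minimal, hypothetical Python sketch of such a pipeline. It is not SignAll’s actual implementation: the recognize_signs and transcribe_speech functions are invented placeholders standing in for the glove-and-camera sign recognition and automatic speech recognition stages, and the shared transcript simply prints to the console instead of rendering on two monitors.

```python
# Illustrative sketch of a two-way sign/speech chat pipeline (not SignAll's code).
# Both recognition stages are tiny placeholders so the data flow is runnable:
# signs -> text for the hearing user, speech -> text for the deaf user,
# with both sides feeding one shared on-screen transcript.

from dataclasses import dataclass, field
from typing import List


@dataclass
class ChatTranscript:
    """Shared dialogue that both participants read on their own screens."""
    messages: List[str] = field(default_factory=list)

    def post(self, speaker: str, text: str) -> None:
        line = f"{speaker}: {text}"
        self.messages.append(line)
        print(line)  # stands in for rendering the chat on the two monitors


def recognize_signs(glosses: List[str]) -> str:
    """Placeholder for the sign recognition stage.

    A deployed system would fuse glove sensor data and camera video and map
    sign sequences to English; here we only look up pre-labelled glosses.
    """
    gloss_to_english = {
        ("HELLO",): "Hello!",
        ("HELP", "YOU", "QUESTION"): "Can I help you?",
    }
    return gloss_to_english.get(tuple(glosses), " ".join(glosses).lower())


def transcribe_speech(audio_clip: str) -> str:
    """Placeholder for automatic speech recognition.

    We assume the audio is already transcribed; a real system would call an
    ASR engine at this point.
    """
    return audio_clip


if __name__ == "__main__":
    chat = ChatTranscript()

    # The deaf participant signs in front of the cameras while wearing gloves.
    chat.post("Deaf user", recognize_signs(["HELLO"]))

    # The hearing participant replies by voice; ASR turns it into readable text.
    chat.post("Hearing user", transcribe_speech("Hi, welcome to the store."))

    chat.post("Deaf user", recognize_signs(["HELP", "YOU", "QUESTION"]))
```

In a production system, the two placeholder functions are where trained sign recognition models and a speech recognition engine would plug in; the surrounding chat logic would stay essentially the same.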
The SignAll system can be used in business and education settings. The technology increases accessibility for deaf employees in the workplace and allows companies to offer better customer service to deaf customers. It can also help friends and families communicate with one another.

KinTrans is a Dallas-based startup that is currently developing machine translation software that transforms sign language into spoken dialogue. The technology works by using a 3D motion-sensing video camera to observe a person signing with their hands and body. The system then translates their signs, speaks them aloud and displays them on a connected digital screen. In the real world, envision a deaf individual walking into a store, standing in front of the device and signing. The machine translates what they are saying, and the hearing person can type a reply that is signed by an animated avatar on the screen. The great part of this technology is that it helps both the deaf community and the hearing world.

KinTrans technology is designed for use in malls, airports, hotels and hospitals. It is currently being tested in government service settings and at a bank in Dubai. The system can distinguish thousands of signs in American and Arabic Sign Language with 98% accuracy. Future versions of the technology will include Portuguese Sign Language and Indo-Pakistani Sign Language.

Machine translation for sign language has seen many improvements since it was first developed in the 1970s. Thankfully, recent advances have paid more attention to the needs of deaf signers in addition to those of the hearing world. As the technology continues to advance, it will become more common to see artificial intelligence aiding interactions between deaf individuals and those who don’t know sign language. The technology will become more popular with the deaf population as it improves, and once it is accurate and consistent, users will be able to trust it to give them more independence in their daily lives and make it easier for them to communicate with others. This type of technology can also transform the way businesses connect with deaf customers and improve their buying experience.