RAWEB

17.9 For each two-way voice communication web application that makes it possible to identify the activity of a speaker, it must also be possible to identify the activity of a signer. Is this rule respected?

Official methodology to test criterion 17.9

Test 1 (17.9.1)

  1. Activate the web application and launch a video call between the two devices.
  2. Initiate a spoken activity on one device, and check that the second device displays information identifying this activity (for example, a coloured halo around the thumbnail of the person who is speaking).
  3. If it does:
    • look for a manual mechanism (e.g. a button) that lets the person signing indicate that they are signing;
    • otherwise, perform gestures in front of the camera (see note) and check that information is automatically displayed identifying this visual activity.
  4. If either mechanism is present, the test is validated.
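Step 3 allows automatic detection of visual activity as an alternative to a manual button. The sketch below illustrates one plausible way such detection could work, by measuring how much the video image changes between frames; the function names, frame model (arrays of grayscale pixel values), and threshold are illustrative assumptions, not part of the RAWEB methodology.

```typescript
// Hypothetical sketch: infer "visual activity" from inter-frame change,
// analogous to how sound level reveals spoken activity.
// Frames are modelled as arrays of grayscale pixel values (0-255).

/** Mean absolute difference between two same-sized frames. */
function frameDelta(prev: number[], curr: number[]): number {
  let sum = 0;
  for (let i = 0; i < curr.length; i++) {
    sum += Math.abs(curr[i] - prev[i]);
  }
  return sum / curr.length;
}

/** Flag visual activity when inter-frame change exceeds a threshold. */
function isVisuallyActive(
  prev: number[],
  curr: number[],
  threshold = 10
): boolean {
  return frameDelta(prev, curr) > threshold;
}

// A static view shows no activity; a hand moving across it does.
const still = [50, 50, 50, 50];
const moved = [50, 200, 200, 50];
console.log(isVisuallyActive(still, still)); // false
console.log(isVisuallyActive(still, moved)); // true
```

In a real application the frames would come from the camera stream, but the principle tested here is the same: any sufficiently large visual change triggers the signing indicator.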

Note: In communication applications, spoken activity is identified not by recognising an intelligible verbal message (a word or a sentence, for example) but solely by detecting a sound (a noise, for example). Likewise, visual activity can be detected automatically even when it does not correspond to a meaningful element of sign language, and such detection is sufficient to identify the activity of a person who signs. The test can therefore be performed with gestures that carry no meaning in sign language.
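The note's point, that spoken activity is inferred from mere sound level rather than from recognising words, can be sketched as a simple energy threshold. Sample values in [-1, 1] mimic PCM audio; the function names and the threshold are illustrative assumptions, not taken from any particular application.

```typescript
// Hypothetical sketch: spoken activity is inferred from RMS sound
// energy alone; no speech recognition or intelligibility is involved.

/** Root-mean-square energy of an audio sample window. */
function rms(samples: number[]): number {
  const sumSq = samples.reduce((acc, s) => acc + s * s, 0);
  return Math.sqrt(sumSq / samples.length);
}

/** Flag spoken activity when the energy exceeds a threshold. */
function isSpeaking(samples: number[], threshold = 0.05): boolean {
  return rms(samples) > threshold;
}

// Any noise above the threshold counts, intelligible or not.
console.log(isSpeaking([0.001, -0.002, 0.001, 0.0])); // false
console.log(isSpeaking([0.3, -0.4, 0.25, -0.35]));    // true
```

This is why, per the note, meaningless gestures are an acceptable test input for the visual channel: the detection mechanism in both channels responds to activity, not to meaning.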