GNOME Bugzilla – Bug 520595
Orca is far too "chatty" for persons with learning disabilities
Last modified: 2011-11-24 10:27:29 UTC
One of the things Mats Lundalv identified in his "A spotlight on the need for basic reading and writing support in GNOME" (http://mail.gnome.org/archives/gnome-accessibility-list/2008-February/msg00055.html) was an option to eliminate the speaking of context:

> Options to shut the speech up and just provide the support needed

Having such a verbosity option would benefit not only users with learning disabilities but also users with visual impairments, in particular those with a high degree of usable vision (i.e. they know the context/locusOfFocus but can maximize efficiency and minimize eye fatigue by using synthesized speech for sustained reading and/or text that just isn't quite big enough).

As I recall, Mike indicated at yesterday's team meeting that he wanted to take a look at, and possibly improve on, Orca's use of verbosity settings. I think this should be included as part of that consideration/improvement. Thoughts?
First coarse pass at GNOME 2.24 planning.
Created attachment 112285 [details] [review] Revision #1. There's now a new radio button called "No Context" for the Speech Verbosity radio button group. Might need a better name. Seems to work nicely.
After rethinking this, I wonder if the "No Context" maybe should be a separate checkbox so you can still set the speech verbosity to either Brief or Verbose. Thoughts?
Created attachment 112342 [details] [review] Revision #2. This version uses a separate check box to indicate whether the context should be spoken. This now also allows the user to set either brief or verbose verbosity. I think I like this one much better.
Hey Rich. I'm a little (and possibly considerably) confused about the functionality, so let me explain what I think this RFE is all about, and what I'm seeing, first. Once you can get me on the same page as you, I'll address your observations, because I think they're good, but I'm viewing them from my perspective of what this bug means and my comments might be irrelevant once I'm no longer confused. :-)

So.... In my mind, what is being asked for here is an option to just speak the label/text of the object with focus -- and only that label/text. Thus there are three things that should not be spoken if speech context is unchecked:

1. Role information
2. State information
3. Context (in our traditional sense of the word)

As an example, I got into the Orca Preferences dialog and moved focus to the Show Orca main window checkbox, which I happen to have unchecked. Then I Alt+Tabbed to another window and back to the Preferences dialog.

I'd expect to hear: "Show Orca main window"
I hear: "Orca Preferences, Show Orca main window checkbox unchecked"

Here's my thinking: If I have *that* much vision (i.e. maybe I just have a mild visual impairment and/or a learning disability), I can tell that the thing is a checkbox and that it's unchecked. My problem is trying to make out (or cognitively process) the text/label for that checkbox.

Hopefully I'm making some modicum of sense. :-) Thanks for your help! (And yes, I did take the Focalin today -- and I slept too! ;-) ;-) )
I took "eliminate speech context" to mean don't speak the speech context. I.e. getSpeechContext() returns nothing. Will also saying that "this should be an easy one" suggested that that was what was wanted. Removing role and state informations would be a lot harder. Not impossible. Just not trivial. Perhaps it's time for Mike or Will to clarify what they would like to see done here.
(In reply to comment #6)
> I took "eliminate speech context" to mean don't speak the
> speech context. I.e. getSpeechContext() returns nothing.

Well...that's how I took it, too. But perhaps we should ask Mats what he really meant by "options to shut the speech up and just provide the support needed" instead of guessing about this. I'll forward this to him to get his thoughts.
Here's some information from Mats. It really seems to be geared towards only reading the visible text of interest and providing no additional information:

"What I'm after is basically to provide a basic mode for Orca where you don't have any automatic context information spoken - a silent Orca that just speaks if:

a) you type text and have keyboard input echoing enabled (and here I would like to have a new option for sentence echoing after punctuation (.?!:;) and Enter (forced new line))

b) you use the existing speech commands like the current "read all" and "speak previous/current/next line/word/char". (Here I would also like to have an additional "speak previous/current/next sentence" command. It would also be good to have a "read highlighted text", e.g. by reading when copying the highlighted text to the clipboard - Ctrl+C. The currently existing speak-highlighted-word-on-double-click would also be fine - if it just spoke the word, but now it seems to first speak context information and then the word - I'm not quite clear on how it behaves.)

This way to use Orca would basically be used by users with a reading impairment (and no or only moderate visual impairment) when reading or typing text in a text editor (e.g. OpenOffice.org Writer, AbiWord, etc.). In addition to this, these users would benefit from using the screen-reading functionality of Orca to just get support for accessing the (non-editable) text content in web pages etc. - current speech commands could be used here too; no context information (window focus, state, etc.) is spoken, only text content. (A future option would be reading text under the mouse cursor - as indicated in my original suggestion.)

A very good option would be to provide a package setting (a "reading support mode" of sorts) so that you don't have to go in and check or un-check a whole lot of different settings individually in different tabs to achieve this basic mode of operation."

Part of me is toying with the notion of creating a grammar/language to express what should be included in presentations for speech and braille. For example, we might have something like this format string for a checkbox: "%c %l %s". This would mean "present the context changes, the label, and the state." This is something I experimented with early on, but chose not to do because we didn't understand the verbosity space well enough. The thought here, however, was to have different profiles that could be in place. For example, the "verbose" profile would have a whole set of format strings for verbose output, and perhaps a "text only" profile would have a separate set of strings for what Mats is describing above.

Before implementing this stuff, though, I think we should develop a new Persona (http://live.gnome.org/Orca/Specification/Personas) for this type of user and work to better understand the users' requirements. I can try to enlist Mats to help us with this, because we really need end user direction here.
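To make the format-string idea above concrete, here is a minimal Python sketch. The profile names, role keys, and helper arguments are all hypothetical and not part of Orca's actual code; this only illustrates how per-role format strings could drive verbosity profiles.

    # Hypothetical sketch of the "%c %l %s" profile idea; illustrative only.
    PROFILES = {
        "verbose":   {"check box": "%c %l %s"},  # context, label, state
        "text only": {"check box": "%l"},        # displayed label only
    }

    def build_utterance(profile, role, context, label, state):
        """Expand the per-role format string into the text to be spoken."""
        template = PROFILES[profile].get(role, "%l")
        utterance = (template.replace("%c", context)
                             .replace("%l", label)
                             .replace("%s", state))
        return " ".join(utterance.split())  # collapse empty substitutions

    # The "Show Orca main window" example from comment #5:
    print(build_utterance("verbose", "check box",
                          "Orca Preferences", "Show Orca main window", "unchecked"))
    # -> Orca Preferences Show Orca main window unchecked
    print(build_utterance("text only", "check box",
                          "Orca Preferences", "Show Orca main window", "unchecked"))
    # -> Show Orca main window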
This is all great stuff, but it's clear that we still have a bit of research and planning to do. Moving to a target of FUTURE for now to get it off my immediate radar. Thanks.
In light of the discussion following the patch, Joanie's comments in comment #5 and Mats' comments in comment #8, we have a lot more work to do.
The work in bug #570658 should hopefully support most of this RFE.
Ale, it occurs to me that this bug is related to -- and might be easily solvable as part of -- some of the other work being done towards the Andalucía bug fixes. Therefore, I'm adding this to your metabug whilst I'm thinking about it so I don't forget later. Hope that's ok with you....
Good for me. Thanks!
Created attachment 167906 [details] [review] Proposed patch - everything but the GUI

* Adds a new setting (onlySpeakDisplayedText)
* In theory does what the new setting claims :-)
* GUI will follow in a separate patch
* Not yet regression tested

Attaching now just in case bugzilla goes down for hours and hours again. :-/
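For readers following along, a rough sketch of the behavior the onlySpeakDisplayedText setting is meant to enable. The settings dictionary and speak() call are stand-ins rather than Orca's real modules, and the code below is not the attached patch itself.

    # Stand-in illustration: suppress role/state when only displayed
    # text should be spoken (role and state are not visible on screen).
    settings = {"onlySpeakDisplayedText": True}

    def speak(text):
        print(text)  # stand-in for sending the utterance to the speech server

    def present_object(label, role=None, state=None):
        parts = [label]
        if not settings["onlySpeakDisplayedText"]:
            parts.extend(p for p in (role, state) if p)
        speak(" ".join(parts))

    present_object("Show Orca main window", role="check box", state="not checked")
    # With the setting enabled, only "Show Orca main window" is spoken.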
Marking as depends on 543157 because the patch I just attached assumes that you have that patch/fix.
Created attachment 167909 [details] [review] Proposed GUI changes

As you might have guessed, this patch is intended to be applied on top of the 'Proposed patch - everything but the GUI'.

The changes in orca-setup.ui are greater than one might expect. The reason is that Orca currently has a table of checkboxes at the bottom of the Speech page. Half of those checkboxes are for items which will not be spoken if the new setting is enabled/checked. Therefore the thing to do, IMHO, is toggle the sensitivity of those items as a group. To do that, I needed to turn the table into an hbox with two vboxes.

This is pretty thoroughly functionally tested and seems to be doing the right thing. I'll run the regression tests now....
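As a rough illustration of the sensitivity-toggling idea (not the actual orca-setup.ui changes, which use Glade markup of that era), a minimal PyGObject/GTK 3 sketch with illustrative widget labels:

    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    def on_only_displayed_toggled(check, dependent_box):
        # The checkboxes controlling non-displayed speech no longer apply
        # when "only speak displayed text" is on, so grey them out as a group.
        dependent_box.set_sensitive(not check.get_active())

    win = Gtk.Window(title="Speech settings sketch")
    outer = Gtk.Box(orientation=Gtk.Orientation.HORIZONTAL, spacing=12)

    dependent = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=6)
    for label in ("Speak object mnemonics", "Speak tutorial messages"):
        dependent.pack_start(Gtk.CheckButton(label=label), False, False, 0)

    only_displayed = Gtk.CheckButton(label="Only speak displayed text")
    only_displayed.connect("toggled", on_only_displayed_toggled, dependent)

    outer.pack_start(only_displayed, False, False, 0)
    outer.pack_start(dependent, False, False, 0)
    win.add(outer)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()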
This passes the regression tests, though we should still write a test or two for the change made here. In the meantime.... Assuming there is nothing glaringly wrong with the proposed patches, I think the thing to do, once the fix for bug 543157 has been committed to master, is to create a single patch for this bug and solicit user feedback and testing. (Sooner rather than later if we are planning on including this in 2.32. Otherwise, it can wait.)
Created attachment 167947 [details] [review] Single patch made up of the previous two I've combined the two original patches to ease user testing.
Hi,

Okay, I'm running the patch on top of the latest Orca from master (including the patch to add the system voice). Looks good thus far. I've played around with the checkbox for only speaking displayed text both checked and unchecked, and Orca seems to be behaving as expected.

I was wondering, though... Shouldn't Orca still provide feedback when you do something like Orca + F12 to toggle browsing mode? Or basically any key combination that toggles a setting? Surely the user still needs feedback on that? What if the user accidentally hit Orca + h and Orca went into learn mode? Suddenly the keyboard wouldn't appear to be working... But perhaps this is something that can be worked on after the commit.

Orca is working okay with the patch.
Thanks Paul. I really appreciate your quick responses to these calls for testing.

As for the issue you mention: I am very much aware of it, but the question is how to address it. I've been told we shouldn't speak text which is not on the screen. The text you mention ain't on the screen. So the other solution would be to identify which keybindings are likely not to be needed or wanted (and/or will be a source of confusion) for users who are sighted but have a learning disability, and provide a means by which those users can disable those keybindings.

It's still something I'm debating. But basically, we don't seem to be getting input from the users in question because Orca is too chatty (amongst other things). <shrugs>
Comment on attachment 167947 [details] [review] Single patch made up of the previous two http://git.gnome.org/browse/orca/commit/?id=a85356554713d5e5cfbb8cea223372d161bddf4a
Regarding how best to proceed with respect to the issue Paul pointed out (thanks again!), I posted something to OATS-sig: http://lists.becta.org.uk/pipermail/oats-sig/2010-August/001986.html
Below is Mats' response. (The 'this' he is referring to is the speaking of command feedback, i.e. stuff which is not on the screen.) Based on it, I will reinstate speaking of command feedback. BTW, we actually already have sentence echo in Orca.

============

As far as I understand, this will not make a huge difference for the new intended users. The important thing is that it should be easy to put ORCA in a basic "reading support for seeing users" mode - ideally by just ticking one major setting for this - which turns off the automatic reading of redundant (for the seeing user) navigation and state information. There is no big problem still having some (or even all) special keyboard shortcuts for spoken information - such as time and date (this may in fact be very useful for some individuals) - as long as unwanted speech is not presented to and distracting the user in the basic writing and reading activities.

The first additional thing I would love to see would be the addition of the "read sentence" option - both as a keyboard input echo and as a reading command option - that I suggested in my "spotlight" post. I'm sure this would also be much appreciated by some VI users as well.

(Another issue that I just mentioned in the "spotlight" post is that of higher quality speech. It would be a major step for users in need of reading/writing support in GNOME if an option of adding alternative - including some commercial - voices was made available - in particular for users reading and writing in languages other than English, and especially the smaller languages where it's hard to find acceptable quality free voices.)

But first things first ;-)
I have just committed a two-line change to master which causes all messages presented by speakMessage() or presentMessage() to be spoken using the system voice. This restores all the messages that were not being spoken if the user had opted to only have displayed text presented. (I.e. it should not change the user experience in any way for our "traditional" users.)

I hope to see further involvement from the LD community. Based on their feedback, we can refine what does and does not get presented by {speak,present}Message().

http://git.gnome.org/browse/orca/commit/?id=09d363a8b5e41e116f0c0f7d62a2d905f3ab3df8
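To illustrate the effect of that change: a hedged sketch where the function names mirror the comment but the bodies are stand-ins, not the committed code. Command feedback is still announced under onlySpeakDisplayedText and is voiced with the system voice so it stands apart from document content.

    # Illustrative stand-ins only; not the committed two-line change.
    settings = {"onlySpeakDisplayedText": True}

    def speak(text, voice="default"):
        print("[%s voice] %s" % (voice, text))  # stand-in for the speech server

    def speakMessage(text):
        # Command feedback is not on the screen, but it is still announced,
        # using the system voice.
        speak(text, voice="system")

    def speakDisplayedText(text):
        speak(text)

    speakMessage("Learn mode.  Press escape to exit.")  # hypothetical message
    speakDisplayedText("Show Orca main window")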
*** Bug 578439 has been marked as a duplicate of this bug. ***