Lichess Voice Recognition Beta

Moves like a4, e4, g4, anything that's just a pawn move, are a bit funky, even though the speech-to-text transcription correctly shows what I said, like "e four" or "e five".
Whatever tool Eric Rosen uses to narrate moves when he plays blindfolded pronounces “Bishop b2” and “Bishop e2” almost the same. Hopefully this recognition feature can tell the two apart reliably?
I LOVE this. I was asking about this just a few weeks ago; I made a thread because I wanted to use the Accessibility features to move. I'd been wondering for a long time why speech-to-text couldn't be used to input moves into the keyboard input box.

Please make this a real feature!

I am 2100+ in bullet and 1900+ in Correspondence right now BUT I DO NOT KNOW COORDINATES!

This will teach me to play with coordinates without slowing me down!!!

Hopefully I will be able to play bullet like this.
This is a BRILLIANT way to learn/teach coordinates. A month of playing like this would really drill it into your brain!
Works nicely on macOS Safari, though the a-file caused problems ("Queen a four" had to be "Queen alpha four"). All other files worked fine.
One position (voice.lichess.dev/de/training/2QBN2) was weird: "Pawn e four" was correctly transcribed, but I had to say "Pawn takes e4" to get the move actually played.
This is really nice for playing with a real board instead. Until now, every time I made a move I had to turn to my PC and make it there as well. Now, after seeing what my opponent plays on the screen, I can just keep looking at the board and say my move without turning to the computer. It will probably also mean I can play some faster time controls with a real board.
Please develop a blind chess mode with this update. I need to hear what the opponent played.
Can't wait for people to yell "RESIGN" during a stream
@yardiewizardie said in #91:
> Moves like a4, e4, g4, anything that's just a pawn move, are a bit funky, even though the speech-to-text transcription correctly shows what I said, like "e four" or "e five".

The user can tell when the right phrase was heard, but the code cannot. The recognizer is frequently wrong.

Possible improvements to colored-arrow disambiguation are being considered, but the fact remains that we can't tell most of a, b, c, d, e, and g apart with 100% or even 70% certainty. This is due to processing limitations as well as the wide range of accents, background noise, and mic quality.
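
For illustration only, here is a minimal TypeScript sketch of how confidence-based arrow disambiguation could work; the thresholds, names, and data shapes are assumptions, not the actual lichess code:

```typescript
// Hypothetical sketch, not the actual lichess code: when no transcription
// candidate is a clear winner, fall back to colored-arrow disambiguation.

interface Candidate {
  move: string; // e.g. "e4" or "Be2"
  conf: number; // recognizer confidence in [0, 1]
}

// Assumed thresholds, for illustration only.
const ACCEPT = 0.85; // play the top move outright above this
const MARGIN = 0.15; // rivals within this of the leader stay in play

function resolve(candidates: Candidate[]): { play?: string; arrows?: string[] } {
  const sorted = [...candidates].sort((a, b) => b.conf - a.conf);
  const best = sorted[0];
  if (!best) return {};
  const runnerUp = sorted[1];
  // Clear winner: confident and well ahead of the next candidate.
  if (best.conf >= ACCEPT && (!runnerUp || best.conf - runnerUp.conf > MARGIN)) {
    return { play: best.move };
  }
  // Otherwise offer every close candidate as a colored arrow.
  return { arrows: sorted.filter(c => best.conf - c.conf <= MARGIN).map(c => c.move) };
}

// "bee four", "dee four", and "ee four" are near-homophones, so no candidate
// dominates and the user is asked to choose among arrows instead.
console.log(resolve([
  { move: 'b4', conf: 0.41 },
  { move: 'd4', conf: 0.37 },
  { move: 'e4', conf: 0.35 },
]));
// -> { arrows: [ 'b4', 'd4', 'e4' ] }
```

The point is that a near-homophone like "bee four" vs. "dee four" yields several candidates at similar confidence, so rather than guessing, the UI can draw the close candidates as colored arrows and let the user pick by saying a color word.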

The phonetic alphabet is the best solution for users, but it takes a bit of getting used to. We'll continue looking for ways to improve recognition of the single-letter files, but I wouldn't expect miracles.
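
For the curious, here is a minimal sketch of why the phonetic alphabet helps, built around a hypothetical phraseToMove parser (this is not the lichess source):

```typescript
// Hypothetical sketch: "alpha" through "hotel" are acoustically distinct,
// whereas the bare letter names "ay, bee, cee, dee, ee, gee" rhyme and are
// easy for a recognizer to confuse. With phonetic words, a plain token
// lookup becomes reliable.

const FILES: Record<string, string> = {
  alpha: 'a', bravo: 'b', charlie: 'c', delta: 'd',
  echo: 'e', foxtrot: 'f', golf: 'g', hotel: 'h',
};
const RANKS: Record<string, string> = {
  one: '1', two: '2', three: '3', four: '4',
  five: '5', six: '6', seven: '7', eight: '8',
};
const PIECES: Record<string, string> = {
  king: 'K', queen: 'Q', rook: 'R', bishop: 'B', knight: 'N', pawn: '',
};

// Turn a phrase like "queen alpha four" into a SAN-style move string.
function phraseToMove(phrase: string): string | undefined {
  const words = phrase.toLowerCase().split(/\s+/);
  const piece = words.map(w => PIECES[w]).find(p => p !== undefined) ?? '';
  const file = words.map(w => FILES[w]).find(Boolean);
  const rank = words.map(w => RANKS[w]).find(Boolean);
  return file && rank ? piece + file + rank : undefined;
}

console.log(phraseToMove('queen alpha four')); // "Qa4"
console.log(phraseToMove('pawn echo four'));   // "e4"
```

Since no two phonetic words share a rhyme, a recognizer that struggles to separate "ay/bee/cee/dee/ee/gee" can map them to files with far fewer confusions.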