So, here is the source code for the application. You can download it from my iDisk by clicking here.
I don't know how long this will be up. It all depends on when I get the actual server running. This is just a temporary thing.
For those who don't want to open the source code, here is how you get an NSSpeechRecognizer object working:
- Instantiate your object
- Set the delegate to self in the init method of your controller, like so:
- [speechRecognizerObject setDelegate:self];
- Create a delegate method (essential; just put this method header in your controller's .m file):
- - (void)speechRecognizer:(NSSpeechRecognizer *)sender didRecognizeCommand:(id)command{}
- Fill in the blanks between the braces.
- As long as you have the delegate set to self, the object will look for that method in the class where it is instantiated. When a command is recognized, the recognizer sends it back in the (id)command parameter.
- Just so you know, it will only recognize a given set of commands. To set these commands, you need to pass in an array of commands; I use an NSMutableArray for this. You send in the commands with [speechRecognizerObject setCommands:arrayOfCommands];. A consolidated sketch of all these steps follows.
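To tie the steps together, here is a minimal sketch. The class name SpeechController, the use of awakeFromNib, and the example commands are my own illustrative assumptions, not from the original app; the NSSpeechRecognizer calls themselves are the ones listed above.

#import <Cocoa/Cocoa.h>

@interface SpeechController : NSObject {
    NSSpeechRecognizer *speechRecognizerObject;
}
@end

@implementation SpeechController

- (void)awakeFromNib {
    // Instantiate the recognizer and set its delegate to this class
    speechRecognizerObject = [[NSSpeechRecognizer alloc] init];
    [speechRecognizerObject setDelegate:self];

    // The recognizer only listens for the commands you hand it.
    // These particular commands are made up for the example.
    NSMutableArray *arrayOfCommands = [NSMutableArray arrayWithObjects:
                                          @"Open", @"Close", @"Quit", nil];
    [speechRecognizerObject setCommands:arrayOfCommands];

    // Start listening for spoken commands
    [speechRecognizerObject startListening];
}

// Delegate callback: fires whenever one of the commands is recognized
- (void)speechRecognizer:(NSSpeechRecognizer *)sender
     didRecognizeCommand:(id)command {
    NSLog(@"Recognized command: %@", command);
}

@end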
Through this demo app, I think I also figured out how to detect whether the mouse moves. And it took this application for me to fully understand how delegates work in Objective-C.
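The post doesn't show how the demo detects mouse movement, so here is one hedged way to do it: poll +[NSEvent mouseLocation] with an NSTimer and compare it against the last known point. The MouseWatcher class name and the 0.25-second interval are illustrative assumptions, not taken from the original source.

#import <Cocoa/Cocoa.h>

@interface MouseWatcher : NSObject {
    NSPoint lastMouseLocation;
}
- (void)checkMouse:(NSTimer *)timer;
@end

@implementation MouseWatcher

- (id)init {
    if ((self = [super init])) {
        lastMouseLocation = [NSEvent mouseLocation];
        // Poll a few times per second and compare points
        [NSTimer scheduledTimerWithTimeInterval:0.25
                                         target:self
                                       selector:@selector(checkMouse:)
                                       userInfo:nil
                                        repeats:YES];
    }
    return self;
}

- (void)checkMouse:(NSTimer *)timer {
    // mouseLocation is in screen coordinates
    NSPoint current = [NSEvent mouseLocation];
    if (!NSEqualPoints(current, lastMouseLocation)) {
        NSLog(@"Mouse moved to (%f, %f)", current.x, current.y);
        lastMouseLocation = current;
    }
}

@end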