nightofgrim Posted January 3, 2018

To bump up VR immersion, it would be wicked awesome if calling out your companion's name got their attention and put you into command mode for them. To expand on this, initiating dialogue with NPCs using your voice would also be really nice.
montky Posted January 4, 2018

I'm intrigued by this kind of thing, especially in the context of AGI-voiced companions, re: the NPC SPECIAL-for-all overhaul. The "Talking Heads" idea is a great one, and the voice-to-text-string notion would go very nicely with the AGI-Games Master project. I've seen similar things implemented in games such as ARMA and Mass Effect, where you say a phrase and the NPCs respond to that command.

For Fallout, I've always wanted to see "Talking Heads" revisited: each NPC has a unique regionalized voice and can 'talk' to you, based on the AGI-Games Master's "Dialogue Generating Routine". To that end, I'm collabing with some sound engineers on openGNU and CCA 3 SA-Rd-Remix community sound resources, for vocalizations and a regional audio dialogue replacement AGI interface. The immediate aim of that project is live, real-time "Universal Translator" / "transliterate" work, so as to add to verisimilitude and emotional investment in the narrative.

So, Fable 3 style, I wonder if a proper AGI interface could be developed... thus I'm watching stuff like ThisWeekInTech, KNOWM, OpenCog, the Journal of AGI etc., and some of the ideas there and at BethesdaTV are awesome stuff.

I digress. To achieve what you envisage, it might already be possible thanks to an AGIXML backend, or 'disability accessibility support interfaces', that is, a plugin for Steam etc.:

|Vocal command| -> matching to |onEvent #commandHashingString|

The above will cause the menu to play from the on-event. There is a read-write delay while the sound file is parsed, inside of a couple of seconds, and then the corresponding command is output. It's also a global... so expect a few frames to drop, hehe.

Then later: voice-to-text, text input-string, and MCMC etc. string-response matching. So, instead of manually typing the text string, you're voice-to-text inputting instead (cue meme of Montgomery Scott: "Keyboard, how quaint.")
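The |Vocal command| -> |onEvent| matching described above could be sketched roughly like this. This is a minimal stdlib-only illustration, not any real mod API: the phrase list, event names, and the fuzzy-match cutoff are all hypothetical, and a real implementation would hook into the game's event system rather than return a string.

```python
import difflib

# Hypothetical phrase -> event mapping (names are illustrative only).
COMMAND_EVENTS = {
    "curie follow me": "OnEvent_CompanionFollow",
    "curie wait here": "OnEvent_CompanionWait",
    "dogmeat fetch": "OnEvent_CompanionFetch",
    "open trade": "OnEvent_OpenTradeMenu",
}

def match_command(transcript, cutoff=0.6):
    """Fuzzy-match a voice-to-text transcript against the known phrases.

    Fuzzy matching absorbs recognizer noise (punctuation, small word
    errors); returns the matched event name, or None below the cutoff.
    """
    matches = difflib.get_close_matches(
        transcript.lower().strip(), COMMAND_EVENTS.keys(), n=1, cutoff=cutoff
    )
    return COMMAND_EVENTS[matches[0]] if matches else None
```

So a transcript like "Curie, follow me!" would still land on the `OnEvent_CompanionFollow` entry, while unrelated chatter falls below the cutoff and is ignored.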
The microphone might need to be 'push-to-talk'; then it's a Chomsky-Schützenberger matching routine for the phoneme, to the "onEvent", which could be AHaH/AGIXML from input -> microphone...

However, there is a dark side to this: by dint of a living EULA, your voice metadata might be captured, and/or become a liability in identity security at a future point... cue Sheldon Allman, "Big Brother is Watching Who?"
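The push-to-talk gating could look something like this sketch (class and method names are my own invention): transcripts only reach the matcher while the talk key is held, which also limits how much voice audio ever leaves the recognizer.

```python
class PushToTalkGate:
    """Buffer recognizer output only while the talk key is held,
    so idle chatter never reaches the command matcher."""

    def __init__(self):
        self.active = False
        self.buffer = []

    def key_down(self):
        # Start of a push-to-talk press: open the gate, drop stale text.
        self.active = True
        self.buffer.clear()

    def key_up(self):
        # End of the press: close the gate and hand back what was said.
        self.active = False
        flushed, self.buffer = self.buffer, []
        return flushed

    def on_transcript(self, text):
        # Called by the speech recognizer; ignored while the key is up.
        if self.active:
            self.buffer.append(text)
```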