
Voice control with VR to get companion's attention, "Codsworth!"


nightofgrim


I'm intrigued by this kind of thing, especially in the context of AGI-voiced companions for an "NPC SPECIAL for all" overhaul.

 

The "Talking Heads" idea is a great one, and the voice-to-text-string notion would go very nicely with the AGI-Games Master project.


I've seen similar things implemented in games such as ARMA and Mass Effect, where you say a phrase and the NPCs respond to that command.

 

For Fallout, I've always wanted to see "Talking Heads" revisited: each NPC would have a unique regionalized voice and could 'talk' to you, based on the AGI-Games Master's "Dialogue Generating Routine"...

 

To that end, I'm collaborating with some sound engineers on openGNU and CC BY-SA 3.0 remixable community sound resources, for vocalizations and a regional audio dialogue-replacement AGI interface. The immediate aim of that project is live, real-time "Universal Translator" / transliteration, so as to add verisimilitude and emotional investment in the narrative.

 

So, Fable 3 style, I wonder if a proper AGI interface could be developed... I'm watching things like ThisWeekInTech, KNOWM, OpenCog, the Journal of AGI etc., and some of the ideas there and at BethesdaTV are awesome stuff.

----

 

I digress. What you envisage might already be possible, thanks to an AGIXML backend or 'disability accessibility support interfaces', i.e. a plugin for Steam etc.

 

|Vocal command| -> matched to |onEvent #commandHashingString|

^ the above causes the menu to play from the onEvent. There is a read-write delay as the sound file is parsed, inside of a couple of seconds, and then the corresponding command is output.

 

It's also a global... so expect a few frames to drop, hehe.
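To make that pipeline concrete, here's a minimal Python sketch of the |Vocal command| -> |onEvent #commandHashingString| idea. The command names, the SHA-1-based hashing string, and the dispatch table are all my own illustrative assumptions, not any real mod or Steam API:

```python
import hashlib

# Spoken phrases mapped to hypothetical companion commands.
COMMANDS = {
    "codsworth": "companion_attention",
    "follow me": "companion_follow",
    "wait here": "companion_wait",
}

def command_hash(phrase: str) -> str:
    """Derive a #commandHashingString from a phrase (SHA-1 here is arbitrary)."""
    return hashlib.sha1(phrase.encode("utf-8")).hexdigest()[:8]

# Precomputed hash -> command lookup used by the onEvent dispatch.
EVENT_TABLE = {command_hash(p): c for p, c in COMMANDS.items()}

def on_event(recognized: str) -> None:
    """Fire the command whose hash matches the recognized phrase, if any."""
    command = EVENT_TABLE.get(command_hash(recognized.lower().strip()))
    if command is None:
        print(f"no event for {recognized!r}")   # hash matching is exact
    else:
        print(f"onEvent -> {command}")          # game side would play the menu here

on_event("Codsworth!")  # no event (the '!' changes the hash)
on_event("codsworth")   # onEvent -> companion_attention
```

Exact hashing is brittle, which is why the looser string matching below matters.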

 

Then later: voice-to-text, a text input string, and MCMC-style (etc.) string-response matching. So, instead of manually typing the text string, you're inputting it by voice-to-text instead (cue meme of Montgomery Scott: "A keyboard. How quaint.").
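A sketch of that step, assuming the third-party SpeechRecognition package for the voice-to-text part, with difflib's closest-match scoring as a simple stand-in for the response-matching routine (the command list is made up):

```python
import difflib

import speech_recognition as sr  # third-party: pip install SpeechRecognition

KNOWN_COMMANDS = ["codsworth", "follow me", "wait here", "trade gear"]

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
    print("say a command...")
    audio = recognizer.listen(source)

# Voice-to-text: hand the captured audio to a recognizer backend.
try:
    text = recognizer.recognize_google(audio).lower()
except sr.UnknownValueError:
    text = ""

# Closest-match scoring instead of typing the string manually; difflib
# stands in for whatever matching routine the game side would actually use.
matches = difflib.get_close_matches(text, KNOWN_COMMANDS, n=1, cutoff=0.6)
if matches:
    print(f"heard {text!r} -> matched command {matches[0]!r}")
else:
    print(f"heard {text!r} -> nothing close enough")
```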

 

The microphone might need to be push-to-talk; then it's a Chomsky-Schützenberger-style matching routine from the phonemes to the "onEvent", which could be AHaH/AGIXML from microphone input...
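The push-to-talk gate itself can be as simple as not opening the microphone until a key is pressed. A sketch, again with SpeechRecognition; Enter stands in for a real push-to-talk bind, and it matches at the text level rather than the phoneme level, since the phoneme routine itself is beyond a forum snippet:

```python
import speech_recognition as sr  # pip install SpeechRecognition

recognizer = sr.Recognizer()

def push_to_talk() -> str:
    """Open the mic only after a key press and capture one short phrase."""
    input("press Enter, then speak: ")  # stand-in for a push-to-talk bind
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source, duration=0.5)
        # Cap the capture at ~3 s so the parse delay stays bounded.
        audio = recognizer.listen(source, phrase_time_limit=3)
    try:
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return ""  # nothing intelligible captured

phrase = push_to_talk()
print(f"match target for the onEvent lookup: {phrase!r}")
```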


However, there is a dark side to this: by dint of a living EULA, your voice metadata might be captured, and/or become an identity-security liability at some future point...

 

Cue Sheldon Allman, "Big Brother Is Watching Who?"
