Integrating an LLM into the game (Python server + F4SE)



Posted (edited)

A non-committal question: how feasible is this, and what is the difficulty level?

The flow:

- You enter text in a terminal → the F4SE plugin sends it to a local server (Python).
- Server (Python): receives the request → sends it to the LLM (if there is access) → receives the response → returns it to the game.
- Game: receives the response → displays it in a terminal/book/hologram.

What you need:

- An F4SE plugin (C++, or PyF4SE for Python integration).
- A local server (FastAPI, Flask) for data exchange.
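To make the server half concrete, here is a minimal sketch: a FastAPI app that accepts a prompt from the F4SE plugin and forwards it to a local LLM. It assumes an OpenAI-compatible chat endpoint such as the ones llama.cpp or Ollama expose; the URL, route name, and model name below are placeholders, not a definitive implementation.

```python
# Minimal sketch: FastAPI server that relays in-game prompts to a local LLM.
# Assumes an OpenAI-compatible endpoint; URL and model name are placeholders.
import requests
from fastapi import FastAPI
from pydantic import BaseModel

LLM_URL = "http://127.0.0.1:8080/v1/chat/completions"  # hypothetical local endpoint

app = FastAPI()

class Prompt(BaseModel):
    text: str

@app.post("/ask")
def ask(prompt: Prompt):
    # Forward the in-game text to the LLM and return its reply as JSON.
    r = requests.post(
        LLM_URL,
        json={
            "model": "local-model",  # placeholder model name
            "messages": [{"role": "user", "content": prompt.text}],
        },
        timeout=60,
    )
    r.raise_for_status()
    reply = r.json()["choices"][0]["message"]["content"]
    return {"reply": reply}  # the F4SE plugin would parse this JSON
```

Run with `uvicorn server:app`; the game-side plugin then only needs to POST to `http://127.0.0.1:8000/ask`.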

Edited by South8028

A super crappy and inefficient way to pass messages could be simply to read and write files. Papyrus writes "LLM prompt.txt", the Python server polls for that file every few seconds, reads and deletes it, writes "LLM response.txt", and so on. Nothing to use in the long term, but to get a proof of concept up, that's what I'd do.
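For what it's worth, the polling half of that proof of concept fits in a few lines of Python. The file names follow the suggestion above, and `get_llm_reply` is a stand-in for whatever actually queries the model:

```python
# File-based bridge sketch: poll for "LLM prompt.txt", consume it,
# write the model's answer to "LLM response.txt". Proof of concept only.
import os
import time

PROMPT_FILE = "LLM prompt.txt"
RESPONSE_FILE = "LLM response.txt"

def get_llm_reply(prompt: str) -> str:
    return "stub reply to: " + prompt  # replace with a real LLM call

while True:
    if os.path.exists(PROMPT_FILE):
        with open(PROMPT_FILE, encoding="utf-8") as f:
            prompt = f.read()
        os.remove(PROMPT_FILE)  # delete so the same prompt isn't re-read
        with open(RESPONSE_FILE, "w", encoding="utf-8") as f:
            f.write(get_llm_reply(prompt))
    time.sleep(2)  # "every few seconds"
```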

What's PyF4SE? That doesn't exist, does it? But an F4SE plugin that can send HTTP requests would be cool for all sorts of things.


Posted (edited)
  On 5/5/2025 at 9:33 PM, NeinGaming said:

A super crappy and inefficient way to pass messages could be simply to read and write files. Papyrus writes "LLM prompt.txt", the Python server polls for that file every few seconds, reads and deletes it, writes "LLM response.txt", and so on. Nothing to use in the long term, but to get a proof of concept up, that's what I'd do.

What's PyF4SE? That doesn't exist, does it? But an F4SE plugin that can send HTTP requests would be cool for all sorts of things.


I don't know what PyF4SE is either. What did DeepSeek mean by that term? 😁 It probably just meant F4SE with the ability to talk to a server. I decided to ask whether this was possible in principle. I know it has already been implemented for Skyrim: NPCs can conduct LLM "dialogues".

Edited by South8028

I figured it might be a hallucination ^^ LLMs essentially just repeat stuff that someone said, or that sounds like something someone might have said. You can get them to "agree" to anything, or invent anything, such as PyF4SE. The only search result for that term is this thread... it just flat out lied to you. And it will do that again 😄

  Quote

But more annoyingly if I asked "I am encountering X issue, could Y be the cause" or "could Y be a solution", the response would nearly always be "yes, exactly, it's Y" even when it wasn't the case. I guess part of the problem there is asking leading questions but it would be much more valuable if it could say "no, you're way off".


from https://news.ycombinator.com/item?id=43870819


Posted (edited)
  On 5/6/2025 at 1:53 AM, NeinGaming said:

I figured it might be a hallucination ^^ LLMs essentially just repeat stuff that someone said, or that sounds like something someone might have said. You can get them to "agree" to anything, or invent anything, such as PyF4SE. The only search result for that term is this thread... it just flat out lied to you. And it will do that again 😄

from https://news.ycombinator.com/item?id=43870819


Well, no one in their right mind expects an LLM to give precise instructions. You need to know how to work with them. For example, DeepSeek helped me figure out some Havok tools. It didn't give me precise instructions, but it was able to point me toward solutions, and that helped a lot. The problem is that an LLM is limited to a ~128k-token session. LLMs don't have a continuous-learning mechanism. It's not AGI; it's a limited AI. When LLMs get continuous learning, or some elements of it, a kind of neurogenesis, they will become a very powerful tool. It will happen in the next 3-6 years.

Edited by South8028

  On 5/6/2025 at 3:05 AM, South8028 said:

It will happen in the next 3-6 years.


I don't think so.

Let's start from the goal: all fundamental AI tooling research is done with a single goal: to earn money.

Continuous learning requires access to an information source. But there is a well-known problem: how do you guarantee that the information source contains only correct information? The internet is not that kind of source %)

The only feasible solution is to hire a huge number of highly qualified specialists to prepare/filter information for AI learning.

That requires spending a lot of money... which conflicts with the initial goal.

I don't think this conflict will be solved in the near future.

 


Posted (edited)
  On 5/5/2025 at 4:37 PM, South8028 said:

How feasible is this, and what is the difficulty level?


I suppose that is not the first question you need to answer.

The question is: how would you use this workflow in the game?

OK, let's assume you somehow sent a request to the LLM and got an answer. What next?

Display it as an NPC answer in dialogue? The game engine has no such capability; dynamic answers are not supported. Perhaps you could add such a capability via F4SE, but you would have to add it before attempting to interact with the LLM. Otherwise you are just wasting your time.

Convert the LLM response to an NPC command? OK, but what "commands" does the game engine support? Will they be enough to realize your idea of NPC manipulation? If not, you need to add the necessary capabilities to the engine via F4SE. Otherwise, you are just wasting your time.

Any other uses for LLM responses?

Edited by DlinnyLag

Posted (edited)
  On 5/6/2025 at 6:05 AM, DlinnyLag said:

I don't think so.

[...]

I don't think this conflict will be solved in the near future.


Initially, maybe everyone just wanted to make money. But by now it has all turned into a fight for technological leadership. LLMs are becoming the core of automation and management for industry and military technologies. Chatbots are one of the last things they are interested in, the very tip of the iceberg. Underneath that layer is the desire of the military and the rich guys to assemble an Aladdin's lamp for themselves.

As for why I need it: I would like to make a library with books that would store my sessions with the LLM. As an experiment.

Anyway, this is not serious. I just wondered how feasible it is to integrate an LLM into the game.

Edited by South8028

Posted (edited)
  On 5/6/2025 at 12:00 PM, South8028 said:

Anyway, this is not serious. I just wondered how feasible it is to integrate an LLM into the game.


OK, I would assume you are just investigating the area without any intention to build something at the moment; just curious, for learning purposes.

The following problems need to be solved before a mod can successfully send requests to an LLM and receive LLM responses:

1) Authenticate the gamer's requests on the LLM server. I can't say what authentication algorithm is used by the LLM server you want to use, but most likely a server with an LLM (if it is not run locally) requires request authentication. There are a lot of options, but each of them ultimately requires either asking the gamer to enter some login/password or encoding some login/password into your mod. The login/password requested from the gamer doesn't necessarily have to be for the LLM server itself; it might be for some intermediate server, for example a NexusMods server. You could implement some intermediate application that receives requests from gamers and relays them to the LLM server.

2) Send the request to the LLM server and receive the response. Most likely the server will use HTTP as the transport protocol. There are multiple good libraries; you will need one for the C++ code of your F4SE plugin. An easy task.

3) Keep the context of the gamer <-> LLM communication. This may be unnecessary if you only need independent LLM requests. But if you want to give the gamer some "continuous" experience, you need to store some context and use it in subsequent LLM requests. Perhaps it will depend on the LLM server's API; perhaps you can augment LLM requests with some previously stored information. It depends on the mod's purposes and the LLM server's capabilities (see the sketch below).
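A minimal sketch of point 3, assuming a chat-style API that accepts a list of messages: keep a per-player history and replay it with each request. In a real mod the history would have to be trimmed to fit the model's context window; all names here are illustrative.

```python
# Context-keeping sketch: per-player message history, replayed on each request.
from collections import defaultdict

histories: dict[str, list[dict]] = defaultdict(list)

def build_messages(player_id: str, new_text: str) -> list[dict]:
    # Append the new prompt and return the full history as the
    # "messages" payload for a chat-completions-style request.
    histories[player_id].append({"role": "user", "content": new_text})
    return list(histories[player_id])

def record_reply(player_id: str, reply: str) -> None:
    # Store the model's answer so the next request carries it as context.
    histories[player_id].append({"role": "assistant", "content": reply})
```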

 

Hopefully, this gives you an idea of the complexity.

Edited by DlinnyLag

  On 5/6/2025 at 4:27 PM, DlinnyLag said:

OK, I would assume you are just investigating the area without any intention to build something at the moment; just curious, for learning purposes.

[...]

Hopefully, this gives you an idea of the complexity.


The gist of DeepSeek's answer:

"Your friend's answer is correct, but not hopeless: - Local LLMs solve the authentication problem. - cURL in F4SE solves the HTTP request problem. - JSON context solves the memory problem" 

In other words, it's suggesting I buy a new video card. 😁
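For the "library of books" idea mentioned earlier, the "JSON context" part is genuinely simple. A hypothetical sketch, with an invented file layout, of persisting sessions so a mod could later render them as in-game books:

```python
# Hypothetical session store: one JSON file per saved LLM session.
import json
from pathlib import Path

LIBRARY = Path("llm_library")
LIBRARY.mkdir(exist_ok=True)

def save_session(title: str, messages: list[dict]) -> Path:
    # Write the message history out as pretty-printed JSON.
    path = LIBRARY / f"{title}.json"
    path.write_text(json.dumps(messages, ensure_ascii=False, indent=2),
                    encoding="utf-8")
    return path

def load_session(title: str) -> list[dict]:
    # Read a saved session back, e.g. to display as a book's text.
    return json.loads((LIBRARY / f"{title}.json").read_text(encoding="utf-8"))
```

The expensive part, as the quote implies, is not the JSON but running a local model at all; hence the new video card.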

 

