Working on a separate project, I did a few experiments with JSON responses. For me the response got cut off anyway.
I'll give it a few more tries to see if it's better 👍
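One way to cope with cut-off JSON responses is to check whether the payload actually parses before using (or retrying) it. This is just a minimal sketch; `is_complete_json` is a hypothetical helper, not part of any model API:

```python
import json

def is_complete_json(text: str) -> bool:
    """Return True if `text` parses as valid JSON.

    A truncated model response usually fails to parse, so this
    check can be used to decide whether to retry the request.
    """
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

# A cut-off response fails to parse, a complete one succeeds:
print(is_complete_json('{"answer": "42"'))   # truncated payload
print(is_complete_json('{"answer": "42"}'))  # complete payload
```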
The GPT-3.5 Turbo model seemed to handle it better than GPT-4 in my experiments.
Yeah, we use the 3.5.
The results for the prompts we created are already quite good, and we can slightly reduce our monthly bills. BTW, we don't know yet how many users will actually use this new feature, so we'll have to figure out whether the cost is sustainable for us :D