RE: LeoThread 2024-09-15 01:40

in LeoFinance · 2 months ago

OpenAI o1 - Questionable Empathy

OpenAI o1 came out just in time for me to add it to my 2024 Q3 benchmarks on AI empathy (to be published next week). The results for o1 were at once encouraging and concerning. o1 has an astonishing ability to set aside the typical LLM focus on facts and systems and attend to feelings and emotions when directed to do so. It also has a rather alarming propensity to provide inconsistent and illogical reasons for its answers.

#openai #technology #chatbot #agi

Developing Empathetic LLMs
Although the amount of funding pales in comparison to other areas of AI, over $1.5 billion has been invested in companies such as Hume (proprietary LLM), Inflection AI (Pi.ai, proprietary LLM), and BambuAI (commercial LLM) to develop empathetic AIs.

My partners and I have also put considerable effort into this area and achieved rather remarkable results through selection of the right underlying commercial model (e.g., Llama, Claude, Gemini, Mistral), prompt engineering, RAG, fine-tuning, and deep research into empathy.
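To make the approach concrete, here is a minimal sketch of how an empathy-oriented system prompt can be layered with a simple retrieval (RAG) step on top of an off-the-shelf model. Everything here is illustrative: the `EMPATHY_SYSTEM_PROMPT`, the snippet store, and the word-overlap retriever are stand-ins, not the actual Emy implementation, and a real system would use an embedding-based retriever and a model API call.

```python
# Hypothetical sketch of an empathy-focused prompt pipeline.
# All names and content below are illustrative assumptions.

EMPATHY_SYSTEM_PROMPT = (
    "Focus on the user's feelings before facts. Acknowledge emotions, "
    "reflect them back, and only then offer practical suggestions."
)

# Toy knowledge base of empathy-research snippets (stand-in for a real
# vector store used in RAG).
SNIPPETS = [
    "Active listening means paraphrasing the speaker's emotion first.",
    "Validation means acknowledging that the feeling is understandable.",
    "Avoid immediately problem-solving when the user is distressed.",
]

def retrieve(query: str, snippets: list, k: int = 2) -> list:
    """Rank snippets by simple word overlap with the query
    (a crude stand-in for embedding similarity)."""
    q = set(query.lower().split())
    scored = sorted(snippets, key=lambda s: -len(q & set(s.lower().split())))
    return scored[:k]

def build_prompt(user_message: str) -> str:
    """Assemble the final prompt sent to the underlying model."""
    context = "\n".join(retrieve(user_message, SNIPPETS))
    return (
        f"System: {EMPATHY_SYSTEM_PROMPT}\n"
        f"Context:\n{context}\n"
        f"User: {user_message}"
    )

prompt = build_prompt("I feel overwhelmed and nobody is listening to me")
print(prompt)
```

The point of the sketch is the layering order: behavioral instruction first, retrieved empathy guidance second, user message last, so the model is primed to respond to emotion before content.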

This work has been critical to better understanding and evaluating LLMs for empathy. Our own LLM, Emy (not commercialized, but part of a study at the University of Houston), will be included in next week’s benchmarks.

Conclusion
Jonathan Haidt, author of The Righteous Mind, said, “We were never designed to listen to reason. When you ask people moral questions, time their responses, and scan their brains, their answers and brain activation patterns indicate that they reach conclusions quickly and produce reasons later only to justify what they’ve decided.” There is also evidence this is true for non-moral decisions.

o1 is undoubtedly a leap forward in power. And, as many people have rightly said, we need to be careful about the use of LLMs until they can explain themselves, even if they sometimes just make those explanations up, as humans do. I hope that justifications don’t become the “advanced” AI equivalent of the current generation’s hallucinations and fabrications. At a minimum, reasons should be consistent with the statement being made … although contemporary politics seems to throw that out the window too!