Notes on AI Hallucination

Hallucination evaluation lab

Message

08:32, 26 April 2024 (UTC)

Hi Don,

For info only...

https://newatlas.com/technology/ai-index-report-global-impact/

The link above gives a good general overview of the current state of AI. It also notes that LLMs are very prone to hallucinations...

https://arxiv.org/abs/2305.11747

The link above leads to a paper on how to check for these...
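
If it helps, here is a minimal sketch of one simple detection idea along these lines: ask the model the same question several times and treat low agreement among the answers as a warning sign. The generate() function is a hypothetical stand-in for whatever model call you use, and the scoring is a simplified self-consistency heuristic, not the paper's own benchmark.

from collections import Counter

def generate(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real call to your model.
    raise NotImplementedError("plug in your LLM here")

def consistency_score(prompt: str, n_samples: int = 5) -> float:
    # Sample several answers to the same prompt and return the fraction
    # that agree with the majority answer. Low agreement suggests the
    # model is guessing rather than recalling, a common hallucination cue.
    answers = [generate(prompt).strip().lower() for _ in range(n_samples)]
    _, majority_count = Counter(answers).most_common(1)[0]
    return majority_count / n_samples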

Best wishes,

Hugh😑