Part 3/8:
Researchers at NYU Langone Health, New York University's medical center, collaborated with AI specialists on a study in which they generated 150,000 fake medical documents and mixed them into a dataset used to train AI systems. The researchers then prompted chatbots with medical questions and analyzed their responses. The results were alarmingly clear: even a tiny share of falsified training data caused every chatbot tested to produce incorrect answers, and when only 0.001% of the data was false, approximately 7% of the responses were still erroneous. This shows that just a small amount of inaccuracy can significantly corrupt an AI model's output.
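To make the scale of the contamination concrete, here is a minimal sketch of the kind of dataset-poisoning setup the study describes: fake documents are blended into a clean corpus at a target fraction. The function name and corpus contents are illustrative, not taken from the study itself.

```python
import random

def poison_dataset(clean_docs, fake_docs, fraction, seed=0):
    """Mix fake documents into a clean corpus so they make up roughly
    `fraction` of the final training set (a toy model of the study's
    contamination setup; all names here are hypothetical)."""
    rng = random.Random(seed)
    # Number of fakes needed so that n_fake / (n_clean + n_fake) ~= fraction
    n_fake = round(len(clean_docs) * fraction / (1 - fraction))
    n_fake = min(n_fake, len(fake_docs))
    mixed = clean_docs + rng.sample(fake_docs, n_fake)
    rng.shuffle(mixed)
    return mixed

clean = [f"real-doc-{i}" for i in range(100_000)]
fake = [f"fake-doc-{i}" for i in range(1_000)]

# A 0.001% poisoning rate (as in the study) is a fraction of 0.00001:
# here it amounts to injecting a single fake document into 100,000 real ones.
poisoned = poison_dataset(clean, fake, fraction=0.00001)
rate = sum(d.startswith("fake") for d in poisoned) / len(poisoned)
```

The striking point the sketch illustrates is how little material 0.001% actually is: in a corpus of 100,000 documents it is one document, yet the study found such rates still measurably degraded the chatbots' answers.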