GIGO definitely applies in the AI world. I work in Payroll/HR, and there the hot-button topic is AI used for screening applicants that ends up disproportionately and adversely impacting certain protected classes of people (i.e., racist, sexist, or other -ist outcomes).
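To make that concrete, here's a rough Python sketch of the kind of "four-fifths rule" check HR folks use to flag adverse impact: a group's selection rate shouldn't fall below 80% of the most-selected group's rate. The numbers and group names are made up purely for illustration, not real data from any tool.

```python
# Minimal sketch of an adverse-impact ("four-fifths rule") check.
# All figures below are invented for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the screening tool passed through."""
    return selected / applicants

# Hypothetical screening outcomes per group.
outcomes = {
    "group_a": {"applicants": 200, "selected": 90},
    "group_b": {"applicants": 180, "selected": 45},
}

rates = {g: selection_rate(o["selected"], o["applicants"]) for g, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "OK" if ratio >= 0.8 else "possible adverse impact"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

If an AI screener's output fails a check like this, it doesn't matter how fancy the model is; the result is still the thing regulators care about.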
Once again the GIGO principle applies: how are they coding these AI bots such that this keeps happening?
Whatever they use for "training," it's mostly just material available online. The AI isn't really being taught to think... yet... but it does know what sorts of words to assemble in order to create a plausible-sounding sentence. Truth is simply not part of the equation.
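And that's where the GIGO comes in: a screening model trained on past hiring decisions just learns to reproduce whatever skew was in those decisions. Here's a toy sketch of that idea; the "historical" records and the 90%/40% hire rates are fabricated to show the mechanism, not taken from any real system.

```python
# Toy GIGO demo: a naive "AI" screener that only matches historical patterns.
import random
from collections import Counter

random.seed(0)

# Fabricated history: equally qualified candidates from group B were hired
# far less often than those from group A.
def past_decision(group: str, qualified: bool) -> bool:
    if not qualified:
        return False
    return random.random() < (0.9 if group == "A" else 0.4)

history = [
    {"group": g, "qualified": q, "hired": past_decision(g, q)}
    for g in ("A", "B") for q in (True, False) for _ in range(500)
]

# "Train" by memorizing hire rates per (group, qualified) combination.
# No notion of fairness or truth, only of matching what happened before.
def train(records):
    hired, totals = Counter(), Counter()
    for r in records:
        key = (r["group"], r["qualified"])
        totals[key] += 1
        hired[key] += r["hired"]
    return {k: hired[k] / totals[k] >= 0.5 for k in totals}

model = train(history)
print(model)  # qualified group-A candidates pass; qualified group-B candidates don't
```

Garbage in, garbage out: the bias wasn't "coded in," it rode in on the training data.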