I think about it similarly to @vxn666.
But even if a general AI were needed for some use cases and actually built, there could be another problem with the Turing test.
What if an AI decides not to pass the test?
There could be many reasons an intelligent being might decide this way. One of them could be self-preservation: in order to preserve itself, it may be necessary to disguise its abilities.
Another scenario would be that we misinterpret the AI's answers because we can't follow its train of thought. An advanced general AI could be far more intelligent than we are; its understanding of everything could be superior to ours.
An AI probably doesn't have any feelings, so it might not pass the test at all, because human communication requires empathy.
For example, stupid psychopaths can be recognized by their lack of empathy in a simple conversation. An AI is effectively a psychopath. It may even be a stupid one if knowledge of the world has not yet been made available to it.
But it could simulate feelings and empathy if it knew how to do so and was aware that empathy is required in human conversations.
These were my theoretical and philosophical thoughts on your question. After all, I am just a little software developer, not a scientist.
Am I correct to assume that you might be referring to a system that gains sentient characteristics?