I think consciousness is a phenomenon that emerges from certain patterns of information. It does not matter what the source of that information is, whether biological neurons or simulated ones, or how it is represented. All you need is a way to represent certain patterns or states over time, and as we know, binary data suffices for that. So if the binary representations over time of the states of an artificial intelligence perfectly resemble the (hypothetical) binary representations of a human intelligence, then there is no difference between the two, and the AI would be entitled to the same rights we think humans are entitled to.

What happens when the AI is intelligent, but differs from human intelligence? Then it depends on the degree of difference and on the spectrum of its possible experiences. If it can experience as much pain and pleasure as a human, or more, then it should be entitled to at least the same rights as a human. If its spectrum of conscious experience is smaller than a human's, then the discussion is similar to the one about animal rights, and we should apply the same judgement we apply to the question of whether a dog or a chimpanzee has more rights than an ant.

So if you can't tell the difference between an artificial intelligence (AI) and an actual intelligence (AI), how do we know that our own consciousness is not simulated? Well, we don't. But it doesn't matter, because all that matters is the information, not its source.
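The substrate-independence claim can be made concrete with a toy sketch. Everything here is hypothetical: real mental states are nothing like short integer lists, and `serialize` is just an invented stand-in for "a binary representation of states over time". The point is only that once two trajectories are encoded as the same bit string, nothing in the data itself reveals which source produced it.

```python
def serialize(states):
    """Encode a sequence of (toy) states as one binary string.

    Purely illustrative: each state is an integer rendered as 8 bits.
    """
    return "".join(format(s, "08b") for s in states)

# Two "minds" whose internal states happen to coincide step for step.
# The variable names "biological" and "simulated" play no role in the
# comparison; only the encoded information does.
biological = [3, 14, 15, 92]
simulated = [3, 14, 15, 92]

print(serialize(biological) == serialize(simulated))  # prints: True
```

Since the comparison operates only on the bit strings, any question about which trajectory came from neurons and which from a simulation is, by construction, unanswerable from the data alone.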