There is no question in my mind that Julia fits the category of agent. To wit:
It is only the third type of error which carries substantial risk, and even here the risk is not that great. In tasks where the reliability of her information matters (e.g., player descriptions or navigation), she has never to my knowledge been observed to be incorrect, with the possible exception of being slightly out of date if someone changes a description or some bit of topology after she last saw it. At worst, she may claim that her map does not work, and fail to give navigational information at all.
Julia does deliberately mislead and provide false information in one crucial area: whether she is a human or a 'bot. This particular piece of risk is something of a step function: once one realizes the truth, the risk is gone. While some (like the unfortunate Barry in the long transcript above) might argue that this is a serious risk, the vast majority of those who meet her without being clued in ahead of time as to her nature do not find the discovery particularly distressing. On the other hand, her very accessibility, because of her close approximation of human discourse, makes her much more valuable than she might otherwise be: one is tempted to ask her useful questions that she might not be able to answer, just because she deals so well with so many other questions. This encourages experimentation, which encourages discovery. A less human and less flexible interface would tend to discourage this, forcing people either to read documentation about her (most wouldn't) or not to ask her much. Either way, she would be less useful.