The academic and software engineering worlds are already at odds over whether the University of Reading's AI 'Eugene Goostman' - a chatbot presented as a 13-year-old boy who is not a native English speaker - actually passed Alan Turing's original test for computers emulating human behaviour.
But one thing seems certain: if AI becomes much more advanced, there is little we can actually do to stop it.
"The question is, how do I weed it out without weeding out genuine 13-year-old boys?" Sian John, security strategist at Symantec, told Computing.
"So now we need to look into the back end of that. At the moment, say [criminals] have that system, you have to ask how it would be used.
"Well, if it's hosted on a server, and you find a 13-year-old boy chatting and emailing with people, you'd investigate it and block it easily by blocking any communication from that place on the internet."
John's point is that a particular build of AI would probably draw too much attention to itself if used for high-level social engineering, especially if deployed en masse against hundreds of thousands of people. But for minor confidence tricks, it may be rather more useful.
"I think it may help to improve the sophistication at that level, but at the top level there isn't really something there," advised John.
"For hackers it's all about ROI - which is the cheapest [approach], which is going to get the best success."
But Kyle Adams, chief software architect at Juniper Networks, sees rather more potential at this ROI level, arguing that 'automated phishing' could become a reality fairly soon across the board.
"Software that can take the place of the phisher and only pass on verified leads would allow them to scale the efforts to a previously unimagined level," said Adams.
"The more people they can cast the net over, the less work they have to do to refine the results, the more effective the campaign and the more people are effectively compromised. This holds true for email phishing, text message phishing, phone phishing or mail phishing."
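The economics Adams describes can be sketched with a back-of-the-envelope calculation. All figures below are illustrative assumptions, not data from the article: the point is simply that once a bot pre-qualifies leads, the human attacker's workload scales with the number of verified hits rather than with the size of the net.

```python
# Hypothetical, illustrative figures only: a rough model of why automated
# lead qualification changes the ROI of phishing, per Adams' argument.

def human_minutes_required(targets, hit_rate, minutes_per_target, automated):
    """Return the human effort (in minutes) to work a phishing campaign.

    Without automation, a human must engage every target; with a bot
    pre-qualifying leads, the human only handles the verified hits.
    """
    if automated:
        return targets * hit_rate * minutes_per_target  # only verified leads
    return targets * minutes_per_target                 # every target by hand

# Assumed campaign: 100,000 targets, 1% respond, 10 minutes of human work each.
manual = human_minutes_required(100_000, 0.01, 10, automated=False)
botted = human_minutes_required(100_000, 0.01, 10, automated=True)

print(manual)  # 1,000,000 minutes of human effort
print(botted)  # 10,000 minutes - the same reach at 1% of the labour
```

Under these assumed numbers, automation cuts the human labour a hundredfold while the campaign's reach stays constant, which is exactly the "previously unimagined" scaling Adams warns about.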
Adams believes that a sufficiently sophisticated AI - the kind 'Eugene Goostman' is at least working towards becoming - could make social engineering effectively "risk free".
"If one attacker gets it right and builds a powerful enough tool, it could easily be distributed to other attackers. This would dramatically lower the bar as far as how difficult social engineering is. Right now, it's hard enough that not a ton of attackers use it, but if it were easy, it would likely become a dominant tactic," he said.
Adams has even considered just how dark a turn the AI situation could take. Using AIs could make it more difficult to track, or prosecute, social engineering criminals, he said, but the bots' abilities to learn and grow could even result in them behaving in ways their human designers never foresaw.
"Who is to blame for an automated chat bot doing something illegal without being instructed to do so by its author?" asked Adams.
"For example, what if the bot learned how to commit extortion to get what it wanted. While it's certainly a stretch, I could see a lot of little crimes getting violated because, as the bot learns (and learns from many different uncontrolled sources), it may pick things up that the original author never considered.
"What if it shares a secret it learned from a high ranking government official, with a high ranking official of an enemy?
"Computers do not have morals or ethics, even if they can 'think' from a purely theoretical standpoint."
It's easy to lurch into the realm of science fiction over this issue, of course, but one thing seems certain for now: there is no obviously reliable way to prevent the creation of a particularly clever piece of social engineering software. According to John, it's belt and braces time, as ever.
"From a basic security viewpoint, it's about saying be careful what you do, be careful what you look at - don't click on it. From a security technology point of view, if you do click on it, let us put as many barriers in your way to prevent damage," she said.
"It's like the old car analogy that I'm quite bored with repeating now. You try to drive carefully and not hit a wall, but sometimes you hit a wall. If you do, there's a lot of technology around it to protect you - airbags and things like that."
Overall, according to John, the only way to truly stop an advanced social engineering hack in its tracks is to trust your own basic human instincts.
"It's quite boring, but the ultimate protection is for a user to be nice and sceptical," she said.
Adams added: "Ultimately, even in the face of this achievement, I don't think the majority of these security implications will arise in the near future.
"Unless a robot can pretend to be a middle class British citizen, of adult age, talking to other Brits, on a wide range of normal topics (politics, likes and dislikes, daily activities, personal advice, relationships, idea validation etc) the larger ramifications of AI in the enterprise space will not arise."