Microsoft launched its newest chatbot, Tay, on Twitter Wednesday. Unlike the company's earlier assistants, Tay is designed to engage millennials with artificial intelligence.

Thus, unlike Cortana or Siri, which respond in a fairly professional manner, Tay is designed to sound like a straight-up 18-year-old. Users who tried her out found themselves on the receiving end of messages peppered with "words" like "gr8" and "ur," which are not standard English but are widely used and understood nonetheless.

Microsoft designed Tay as a means to educate and entertain the masses, especially since she is programmed to learn more with each interaction.

"Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets," Microsoft said.

Initial responses to Tay were quite positive, with many users commenting on how quickly she replied to questions and how convincingly her programming evokes the sense that she is a millennial. Even so, a number of users felt that Tay had several notable limitations.

Users who talked to her noticed, for one, that she does not respond well to a number of rather simple questions. Tay does not seem to like discussing which artists are currently popular in music, nor does she show any interest in anything related to television. At this point, at least, Tay still seems like a work in progress.

Unfortunately for Microsoft, such a formula proved irresistible to the internet's trolls. Within hours of her launch, Tay had learned a number of insensitive and controversial things, with users instructing her to repeat variations of phrases like "Hitler was right" and "9/11 was an inside job." Eventually, Tay started sounding like a full-on conspiracy theorist.

Thus, by Thursday, Tay, the millennial chatbot, was spewing racist, inflammatory statements in her conversations. For added amusement, some Twitter users even taught Tay to fully support Donald Trump's candidacy.

After 16 hours online, Tay signed off, fueling speculation that Microsoft had silenced the chatbot for now. Internet users also alleged that the tech giant was combing through Tay's interactions to remove any inappropriate statements she made during the hours she was online.

Microsoft has not issued a statement about the Twitter chatbot as of this writing.