Voice command activation appears to be the way of the future: no search engine or smartphone seems complete without the ability to dictate messages and searches. Apple, for example, has announced updates to its Siri personal assistant, and Google recently launched a conversational search feature that lets users speak their queries aloud rather than type them.

To improve its Bing-powered speak-and-search feature for Windows Phones, Microsoft has found a way to boost both speed and accuracy using Deep Neural Networks (DNNs).

PC Mag reports that the technology was inspired by the functioning of neurons in the brain: it processes commands much as the human mind hears and interprets speech. In other words, it recognizes speech patterns roughly the way a person would.

"Over the past year, we've been working closely with Microsoft Research (MSR) to address limitations of the previous voice experience," Microsoft wrote in a blog post. "To achieve the speed and accuracy improvements, we focused on an advanced approach called Deep Neural Networks (DNNs). DNN is a technology that is inspired by the functioning of neurons in the brain. In a similar way, DNN technology can detect patterns akin to the way biological systems recognize patterns." 
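To make the quoted idea concrete, here is a minimal, purely illustrative sketch of the pattern-detection principle behind a DNN: stacked layers of simple "neurons," each computing a weighted sum followed by a non-linearity. The weights below are hand-picked for demonstration and have nothing to do with Microsoft's actual system, which is trained on speech data at vastly larger scale.

```python
import math

def relu(x):
    # Non-linearity used in hidden layers of many deep networks.
    return max(0.0, x)

def sigmoid(x):
    # Squashes the final score into a 0..1 "confidence" range.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases, activation):
    # One dense layer: each unit takes a weighted sum of all inputs,
    # adds a bias, and applies the activation function.
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(features):
    # Two stacked layers: the hidden layer detects simple features,
    # the output layer combines them into a higher-level pattern --
    # loosely analogous to layers of neurons building up recognition.
    # These hand-picked weights make the network fire when the two
    # input features differ (an XOR-like toy pattern).
    hidden = layer(features, [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0], relu)
    out = layer(hidden, [[2.0, 2.0]], [-1.0], sigmoid)
    return out[0]

# Inputs matching the pattern score high; others score low.
print(forward([1.0, 0.0]))  # differing inputs -> high confidence
print(forward([1.0, 1.0]))  # identical inputs -> low confidence
```

A real speech DNN follows the same layered structure but takes acoustic features as input, has many more layers and units, and learns its weights from large volumes of recorded speech rather than having them set by hand.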

Microsoft has also improved how the device recognizes voice patterns in a crowded space: developers used the DNN technology to pick out a single voice based on tone and cut through ambient background noise.

The whole development appears aimed at cutting response time and improving accuracy. Bing's voice service is trying to outshine Google, Siri, and the rest by being the quickest and by making fewer word-recognition errors than any of its competitors.