How to Make a Speech Recognition System

You might be working on a product and think speech recognition would be an awesome feature to build in. Or maybe you just feel like experimenting with your own Iron Man-style workstation. Whichever it is, today I’m going to look at the tools you can use to make your own voice recognition software and discuss some of the different ways you can go about it.

Speech and Computers


As humans, we communicate with each other through speech quickly and easily. The thought of having to write down or type every sentence and thought in a casual conversation seems slow in comparison.

So why do we communicate with computers this way? Well, computers have been able to understand speech for a long time, but they haven’t been very good at it. Until recently, speech recognition systems topped out at about 80% accuracy. That’s OK, but correcting errors in 20% of the words you say gets annoying very quickly.

That’s all changing now. Modern speech recognition systems understand speech extremely accurately, and they can even talk back to you in a way you can understand.

So What is a Speech Recognition System?

Simply put, it’s any system that takes in audio and attempts to recognize and understand the speech within it. These days, mobile looks like the platform where speech recognition systems will work best.

Here’s a list of some mobile apps that use speech recognition:

  • Google Mobile Apps – on Android, BlackBerry, and iOS – Free
  • Bing – on Android and iOS – Free
  • Siri Assistant – for iOS – Free
  • DriveSafe.ly – for Android, BlackBerry, iOS – $13.95 per year
  • Dragon Downloadable Apps – on Android, BlackBerry, iOS
  • Jibbigo Voice Translation – on Android, iOS – Free

Today I’ll be looking at the tools these apps use to implement their speech functionality.

Why is Recognizing Speech So Difficult?

bot-min

Like many problems in computer science, recognizing speech is more difficult than it seems. Something that seems trivial to you can take decades of research to automate with software.


Some of the factors that make it so difficult are:

  • The complexity of spoken language – In English, many different words sound exactly the same – for example, “red” and “read” (past tense) are pronounced identically but have completely different meanings, and only context tells them apart.
  • People talk fast – When we speak, we don’t break our sentences up into individual words – we kind of just blurt it all out in one long string of sounds with few breaks. This makes it difficult to determine where a word ends and the next one begins.
  • No two people speak in the same way – It’s no good to have a system that needs to be reprogrammed for every individual. A system needs to be able to hear a new voice and understand it immediately.
  • Background noise – Differentiating the speech from the background noise is very difficult. This is especially true if the background noise is also speech (say at a party).

How Does Voice Recognition Work?

iron-man-tony-stark-jarvis-min

Many institutions, scientists, researchers, and companies have invested in speech recognition research. As a result, there are a few different approaches that work to varying degrees. The four main methods are:

  1. Simple audio pattern matching
  2. More complex pattern and feature analysis
  3. Statistical analysis and modeling
  4. Artificial neural networks


1. Simple Word Pattern Matching

This method is the simplest way to build a voice-to-text converter, and it works quite well in some limited cases. It involves recognizing whole words based on their audio signature. You’ve probably used one of these systems before. When you call up a company and a machine asks you for your name or number, it’s probably using this type of speech recognition.

The first thing a speech recognition system needs to do is convert the audio signal into a form a computer can understand. This is usually a spectrogram: effectively a three-dimensional graph displaying time on the x-axis, frequency on the y-axis, and intensity as color. Here’s an example of a spectrogram of some human speech.

[Image: spectrogram of human speech]
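
If you want to compute one yourself, here’s a minimal sketch using NumPy, SciPy, and Matplotlib. It assumes a mono 16-bit WAV recording; the file name speech.wav is just a hypothetical example.

    # Convert an audio file into a spectrogram (assumes a mono WAV file).
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    sample_rate, samples = wavfile.read("speech.wav")  # hypothetical file

    # frequencies -> y-axis (Hz), times -> x-axis (s), intensities -> color
    frequencies, times, intensities = spectrogram(samples, fs=sample_rate)

    # Plot on a decibel scale so quieter speech sounds stay visible
    plt.pcolormesh(times, frequencies, 10 * np.log10(intensities + 1e-10))
    plt.xlabel("Time [s]")
    plt.ylabel("Frequency [Hz]")
    plt.show()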

A pattern matching system will have a limited number of saved words it can understand. It knows what the spectrogram of each of these words looks like, and it uses that to determine which word you said. This works well with very small vocabularies, such as the digits 0-9, but not much beyond that.
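
To make the idea concrete, here’s a minimal sketch of whole-word matching. It assumes you’ve already computed a spectrogram for each template word and for the unknown clip (for example, with the SciPy snippet above) and trimmed them all to the same shape; the variable names are hypothetical.

    import numpy as np

    def match_word(unknown, templates):
        """Return the word whose template spectrogram is closest to `unknown`.

        unknown   -- 2D array (frequency x time) for the clip to recognize
        templates -- dict mapping each known word to its template spectrogram
        """
        best_word, best_distance = None, float("inf")
        for word, template in templates.items():
            # Euclidean distance between the two spectrograms
            distance = np.linalg.norm(unknown - template)
            if distance < best_distance:
                best_word, best_distance = word, distance
        return best_word

    # Usage with a digits-only vocabulary:
    # templates = {"zero": spec_zero, "one": spec_one, ...}
    # print(match_word(unknown_spectrogram, templates))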

2. Pattern and Feature Analysis

Technically, you could extend the above system to work with all words. However, a typical person has a vocabulary of tens of thousands of words, so this would be a hugely inefficient way of doing things.

A better way is to learn the building blocks that make up words – the phonemes – and listen for those. You can then put these together to build and understand whole words and sentences. This is how feature analysis works.
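
In practice, the standard building-block features are MFCCs (mel-frequency cepstral coefficients). Here’s a minimal sketch of extracting them with the third-party librosa library; speech.wav is a hypothetical example file.

    # Extract MFCC features: one compact vector per short frame of audio.
    import librosa

    signal, sample_rate = librosa.load("speech.wav", sr=16000)

    # 13 coefficients per frame; a recognizer then maps sequences of
    # these vectors to phonemes, and phonemes to words.
    mfccs = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=13)
    print(mfccs.shape)  # (13, number_of_frames)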

In reality, this still isn’t very accurate. Just because a computer can recognize the sounds that make up words doesn’t mean it can understand what you are saying.

3. Statistical Analysis and Modeling

A system that can listen to you speak and understand what words you are saying will need some understanding of how a language works. This is called a language model.

By mathematically analyzing a language, you can find patterns. Some words are very likely to be followed by other words, and others are rarely spoken in the same sentence. A phrase like “opened the…” is likely to be followed by words like “door”. If your software has access to a statistical model containing all this data, it can make much better guesses about what words were said in an audio clip.
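
Here’s a minimal sketch of the simplest kind of language model, a bigram model, which just counts which words follow which. The tiny corpus is a hypothetical stand-in for the huge text collections real systems train on.

    # A bigram language model: estimate P(next word | current word).
    from collections import Counter, defaultdict

    corpus = "he opened the door she opened the window he closed the door".split()

    # Count how often each word follows each other word
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def probability(current_word, next_word):
        counts = following[current_word]
        total = sum(counts.values())
        return counts[next_word] / total if total else 0.0

    print(probability("the", "door"))    # ~0.67 – "door" follows "the" twice
    print(probability("the", "window"))  # ~0.33 – "window" only once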

This is the method most successful speech recognition systems have used for the past few decades. But these methods have reached a limit in their accuracy, and obtaining very high accuracy requires more advanced voice recognition technology.

4. Artificial Neural Networks (ANNs)



Artificial neural networks are an attempt to get computers to work more like the human brain. Your brain doesn’t store specific encoded instructions; it has vast networks of neurons that alter their connections to each other as new information passes through them.

Speech recognition using this kind of machine learning is taking off at companies like Google and Microsoft, which have huge databases of information to train these networks.
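
To give a feel for the approach, here’s a minimal sketch of a tiny network that classifies one frame of audio features as one of a set of phonemes. I’ve used PyTorch here as an assumption – the article doesn’t name a framework – and random tensors stand in for real labeled speech data; production systems are vastly deeper and train on thousands of hours of audio.

    import torch
    import torch.nn as nn

    NUM_FEATURES, NUM_PHONEMES = 13, 40  # e.g. 13 MFCCs in, 40 phonemes out

    # A small feed-forward network: feature frame -> phoneme scores
    model = nn.Sequential(
        nn.Linear(NUM_FEATURES, 64),
        nn.ReLU(),
        nn.Linear(64, NUM_PHONEMES),
    )
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # One training step on a fake batch of 32 frames
    features = torch.randn(32, NUM_FEATURES)        # stand-in feature frames
    labels = torch.randint(0, NUM_PHONEMES, (32,))  # stand-in phoneme labels

    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()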

What’s the Best Approach?

The best way of doing this depends on your resources and what you want to achieve. Coding everything from scratch isn’t required, as there are so many great tools and libraries available. Let’s take a look at some of the tools you can use to build your own system.

Commercial APIs


Many of the big cloud providers have APIs you can use for voice recognition. All you need to do is query the API with audio in your code, and it will return the text. Some of the main ones include:

  • Google Cloud Speech API
  • Microsoft’s Bing/Azure Speech API
  • IBM Watson Speech to Text
  • Amazon Transcribe

This is an easy and powerful method, as you’ll essentially have access to all the resources and speech recognition algorithms of these big companies.

Of course, the downside is that most of them aren’t free. And, you can’t customize them very much, as all the processing is done on a remote server. For a free, custom voice recognition system, you’ll need to use a different set of tools.
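
As an illustration, here’s a minimal sketch of calling Google’s Cloud Speech-to-Text from Python (pip install google-cloud-speech). It assumes you’ve already set up Google Cloud credentials, and speech.wav is a hypothetical 16 kHz, 16-bit mono recording.

    from google.cloud import speech

    client = speech.SpeechClient()

    # Read the audio and describe its format to the API
    with open("speech.wav", "rb") as audio_file:
        audio = speech.RecognitionAudio(content=audio_file.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )

    # The heavy lifting happens on Google's servers
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)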

Open Source Voice Recognition Libraries

To build your custom solution, there are some really great libraries you can use. They are fast, accurate, and free. Here are some of the best available – I’ve chosen a few that use different techniques and programming languages.

CMU Sphinx

cmus-min

CMU Sphinx is a group of recognition systems developed at Carnegie Mellon University, each designed for a different purpose. The core recognizers are written in Java (Sphinx4) and C (PocketSphinx), and there are bindings for many other languages. This means you can use the libraries and voice recognition methods even if you want to program in C# or Python. Between them, they provide the components you need to develop a voice recognition system.
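
As a taste of how little code it takes, here’s a minimal sketch of live microphone recognition using the PocketSphinx Python bindings (pip install pocketsphinx); the exact setup can vary between versions, so treat this as a starting point rather than a definitive recipe.

    from pocketsphinx import LiveSpeech

    # Listen on the default microphone and print each recognized phrase
    for phrase in LiveSpeech():
        print(phrase)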

For an awesome example of an application built using CMU Sphinx, check out the Jasper Project on GitHub.

Kaldi

kaldi-min

Kaldi, released in 2011, is a relatively new toolkit that’s gained a reputation for being easy to use. It’s written in C++.

HTK

HTK, also known as the Hidden Markov Model Toolkit, is built for the statistical modeling techniques we discussed earlier. Microsoft owns it, but they are happy for you to use and modify the source code. It’s written in C.

Where to Get Started

If you’re new to building this kind of system, I would go with something based on Python that uses the CMU Sphinx library. Check out this quick tutorial that sets up a very basic system in just 29 lines of Python code.
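
As a sketch of what such a basic system looks like, here’s an example using the SpeechRecognition library (pip install SpeechRecognition pocketsphinx), which wraps CMU Sphinx behind a simple Python API; it also needs PyAudio for microphone access.

    import speech_recognition as sr

    recognizer = sr.Recognizer()

    # Capture one utterance from the default microphone
    with sr.Microphone() as source:
        print("Say something...")
        audio = recognizer.listen(source)

    # Decode it offline with CMU Sphinx
    try:
        print("You said: " + recognizer.recognize_sphinx(audio))
    except sr.UnknownValueError:
        print("Sorry, I couldn't understand that.")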

Finding Developers That Can Help

Needless to say, speech recognition programming is an art form, and putting all this together is a heck of a job. To create something that really works, you’ll need to be a pro yourself or get some professional help. Software teams at DevTeamSpace build these kinds of systems all the time and can certainly help you get your app understanding your users fast.

Conclusion

Speech recognition tech is finally good enough to be useful. Pair that with the rise of mobile devices (and their annoyingly small keyboards), and it’s easy to see it taking off in a big way. To keep up with your competition and make your customers happy, why not try to build a speech recognition system into your products?


Sam Palmer
Web Developer and Tech Writer