Scilab for Speech Recognition

Speech Recognition and the Hidden Markov Model with Scilab
The extension of the signal processing allows us to perform speech recognition with Scilab. It allows fast development due to the simple programming syntax of Scilab.

 “Scilab, the Open source software for numerical computation and Visualization.”

Course Synopsis

Speech recognition is the process by which a computer or machine identifies spoken words. A speech recognition system, in general, comprises speech segmentation, feature extraction, and feature matching against a trained library of stored features. Speech segmentation is easily accomplished by segmenting at points where the power of the sampled signal drops to zero. Feature extraction may be done in a variety of ways, depending on the features one chooses to extract. Typically, these are coefficients that collectively represent the short-time spectrum of the speech signal, such as the mel-frequency cepstral coefficients (MFCCs) or the linear prediction cepstral coefficients (LPCCs). Feature matching is traditionally implemented via dynamic time warping, which provides a means for the temporal alignment of two speech signals that may vary in time or speed. Modern speech recognition systems are, however, based on hidden Markov models (HMMs), developed by Leonard E. Baum and his coworkers in the late 1960s. The HMM is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. As speech signals are short-time stationary processes, modeling them as HMMs is feasible and offers great advantages over its predecessors.
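The dynamic time warping mentioned above can be sketched compactly. The course itself works in Scilab; the following is an illustrative Python version (function name `dtw_distance` is our own), computing the alignment cost between two one-dimensional feature sequences by dynamic programming.

```python
def dtw_distance(x, y):
    """Dynamic time warping distance between two 1-D sequences.

    D[i][j] holds the minimal accumulated cost of aligning the
    first i samples of x with the first j samples of y.
    """
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])      # local distance measure
            D[i][j] = cost + min(D[i - 1][j],     # stretch x
                                 D[i][j - 1],     # stretch y
                                 D[i - 1][j - 1]) # advance both
    return D[n][m]

# A time-stretched copy of a sequence aligns at zero cost:
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # -> 0.0
```

Real systems apply this to vectors of MFCCs per frame rather than scalar samples, with a vector distance (e.g. Euclidean) as the local cost.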

This course is conducted in a workshop-like manner, with a balanced mix of theory and hands-on coding and simulation in Scilab. Extensive exercises are provided throughout the course to cover every angle of algorithm design and implementation using Scilab.

Course Objectives

This two-day course provides a practical introduction to speech recognition and the hidden Markov model. As such, there will be a series of hands-on exercises aimed at helping translate the theoretical models into practical applications.


Who Must Attend

Scientists, mathematicians, engineers and programmers at all levels who work with, or need to learn about, speech recognition and/or the hidden Markov model. No background in either topic is, however, assumed. The detailed course material and many source code listings will be invaluable for both learning and reference.


A basic knowledge of probability theory, signal processing and Scilab programming is necessary.

What you will learn

Basic theoretical concepts and principles of speech recognition and the hidden Markov model

Scilab implementation of the related algorithms

Course Outline

The course begins with an overview of the speech recognition problem, and a review of some common speech analysis models. The dynamic time warping algorithm and the hidden Markov model are then introduced in turn, with the basic principles behind these methods discussed both through theory and practice. Programming examples are provided at the end of each section to help reconcile theory with actual application.

  • Introduction
  • The speech signal
  • The signal classification problem
  • Speech analysis models
    • Filter banks, Critical band scales, Linear prediction, Autoregressive models, Homomorphic systems, Cepstral transformation, Cepstral coefficients
  • Pattern recognition
    • Distance and distortion measures, Time alignment, Dynamic time warping, Dynamic programming
  • Hidden Markov models
    • States and observations, State transition probabilities and observation probabilities, The three problems
  • The evaluation problem
    • Forward and backward variables
  • The decoding problem
    • Viterbi algorithm 
  • The learning problem
    • Maximum likelihood estimation, Expectation-maximization algorithm, Discrete observation symbols, Continuous observation densities
  • Implementation issues
    • Left-right hidden Markov models, The initial estimates
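The decoding problem in the outline above is solved by the Viterbi algorithm. As a rough illustration of the idea (in Python rather than the Scilab used in class, and with a made-up two-state, three-symbol model), the most likely state sequence for a discrete-observation HMM can be found as follows:

```python
def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete-observation HMM.

    obs: observation symbol indices; pi[i]: initial state probability;
    A[i][j]: transition probability i -> j; B[i][k]: probability of
    emitting symbol k in state i.
    """
    N = len(pi)
    # delta[j]: probability of the best path ending in state j
    delta = [pi[j] * B[j][obs[0]] for j in range(N)]
    psi = []  # backpointers, one list per time step after the first
    for o in obs[1:]:
        prev = delta
        psi.append([max(range(N), key=lambda i: prev[i] * A[i][j])
                    for j in range(N)])
        delta = [prev[psi[-1][j]] * A[psi[-1][j]][j] * B[j][o]
                 for j in range(N)]
    # Backtrack from the best final state.
    path = [max(range(N), key=lambda j: delta[j])]
    for bp in reversed(psi):
        path.append(bp[path[-1]])
    return list(reversed(path)), max(delta)

# Hypothetical model: states 0 and 1, symbols 0, 1, 2.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]
path, p = viterbi([0, 1, 2], pi, A, B)
print(path)  # -> [0, 0, 1]
```

The evaluation problem replaces the `max` in the recursion with a sum (the forward algorithm), and the learning problem re-estimates `pi`, `A` and `B` via expectation-maximization (Baum-Welch).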


© 2010-2018 Trity Technologies Sdn Bhd. All Rights Reserved.