
Hi 👋, I'm Mohammed Maqsood Shaik

Building the future with data-driven insights and machine learning mastery


- 🔭 I’m currently working on **machine learning speech models**
- 👯 I’m looking to collaborate on all things machine learning
- 📫 How to reach me: [email protected]
- ⚡ Fun fact: I am funny

Languages and Tools:

AWS, Bash, C, Docker, Git, Java, Jenkins, Kubernetes, Linux, pandas, Python, PyTorch, scikit-learn, seaborn, TensorFlow

My published work:

Self-supervised representation learning for speech often involves a quantization step that transforms the acoustic input into discrete units. However, it remains unclear how to characterize the relationship between these discrete units and abstract phonetic categories such as phonemes. In this paper, we develop an information-theoretic framework whereby we represent each phonetic category as a distribution over discrete units. We then apply our framework to two different self-supervised models (namely, wav2vec 2.0 and XLSR) and use American English speech as a case study. Our study demonstrates that the entropy of phonetic distributions reflects the variability of the underlying speech sounds, with phonetically similar sounds exhibiting similar distributions. While our study confirms the lack of direct one-to-one correspondence, we find an intriguing indirect relationship between phonetic categories and discrete units.
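
As a toy illustration of the framework's central quantity, here is a minimal sketch that estimates P(unit | phoneme) from aligned phoneme/unit pairs and computes the entropy of each phonetic distribution; the function names and the tiny aligned dataset are illustrative, not taken from the paper:

```python
import numpy as np
from collections import Counter, defaultdict

def phonetic_distributions(phonemes, units, num_units):
    """Estimate P(unit | phoneme) from aligned (phoneme, unit) pairs."""
    counts = defaultdict(Counter)
    for ph, u in zip(phonemes, units):
        counts[ph][u] += 1
    dists = {}
    for ph, ctr in counts.items():
        p = np.zeros(num_units)
        for u, c in ctr.items():
            p[u] = c
        dists[ph] = p / p.sum()
    return dists

def entropy(p, eps=1e-12):
    """Shannon entropy in bits; higher values mean more variable sounds."""
    return float(-(p * np.log2(p + eps)).sum())

# Toy example: two phonemes quantized into a codebook of 4 discrete units.
phonemes = ["AA", "AA", "AA", "IY", "IY", "IY"]
units    = [0,    0,    1,    2,    2,    2]
for ph, p in phonetic_distributions(phonemes, units, num_units=4).items():
    print(ph, round(entropy(p), 3))  # AA spreads over two units, so its entropy is higher
```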

We investigate the nature of the discrete units in multilingual, self-supervised speech models. We employ language identification as a probing task and demonstrate the difficulty of predicting the language of an utterance from its discretized representation. Our findings support the hypothesis that latent, discretized speech representations in self-supervised models correspond to sub-phonetic events that are shared across the world’s languages, rather than to language-specific, abstract phonemic categories.
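
A rough sketch of what such a probing setup could look like, using a scikit-learn logistic-regression probe over bag-of-units histograms; the codebook size, the random stand-in data, and the probe choice are assumptions for illustration, not the paper's exact setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

NUM_UNITS = 100  # assumed size of the model's discrete codebook

def bag_of_units(unit_ids, num_units=NUM_UNITS):
    """Represent an utterance as the normalized histogram of its discrete units."""
    hist = np.bincount(unit_ids, minlength=num_units).astype(float)
    return hist / max(hist.sum(), 1.0)

# Stand-in data: each utterance is a sequence of codebook indices plus a
# language label; in a real probe these come from the discretized model output.
rng = np.random.default_rng(0)
X = np.stack([bag_of_units(rng.integers(0, NUM_UNITS, size=200)) for _ in range(400)])
y = rng.integers(0, 5, size=400)  # 5 language labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Near-chance accuracy suggests the units carry little language identity.
print("probe accuracy:", probe.score(X_te, y_te))
```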

Listeners use short interjections, so-called backchannels, to signify attention or express agreement. The automatic analysis of this behavior is of key importance for human conversation analysis and interactive conversational agents. Current state-of-the-art approaches for backchannel analysis from visual behavior make use of two types of features: features based on body pose and features based on facial behavior. At the same time, transformer neural networks have been established as an effective means to fuse input from different data sources, but they have not yet been applied to backchannel analysis. In this work, we conduct a comprehensive evaluation of multi-modal transformer architectures for automatic backchannel analysis based on pose and facial information. We address both the detection of backchannels and the estimation of the agreement expressed in a backchannel. In evaluations on the MultiMediate’22 backchannel detection challenge, we reach 66.4% accuracy with a one-layer transformer architecture, outperforming the previous state of the art. With a two-layer transformer architecture, we furthermore set a new state of the art (0.0604 MSE) on the task of estimating the amount of agreement expressed in a backchannel.
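
For intuition, a minimal PyTorch sketch of fusing per-frame pose and facial features with a two-layer transformer encoder and two output heads; the feature dimensions, token layout, and pooling are assumptions, not the exact challenge architecture:

```python
import torch
import torch.nn as nn

class BackchannelFusion(nn.Module):
    """Fuse pose and face feature sequences with a transformer encoder."""
    def __init__(self, pose_dim=54, face_dim=128, d_model=128, num_layers=2):
        super().__init__()
        self.pose_proj = nn.Linear(pose_dim, d_model)
        self.face_proj = nn.Linear(face_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.detect_head = nn.Linear(d_model, 1)  # backchannel vs. no backchannel
        self.agree_head = nn.Linear(d_model, 1)   # amount of agreement (regression)

    def forward(self, pose, face):
        # pose: (batch, frames, pose_dim); face: (batch, frames, face_dim)
        tokens = torch.cat([self.pose_proj(pose), self.face_proj(face)], dim=1)
        pooled = self.encoder(tokens).mean(dim=1)  # pool over the fused sequence
        return self.detect_head(pooled), self.agree_head(pooled)

model = BackchannelFusion()
pose = torch.randn(2, 32, 54)    # toy batch: 2 clips, 32 frames each
face = torch.randn(2, 32, 128)
logits, agreement = model(pose, face)
print(logits.shape, agreement.shape)  # torch.Size([2, 1]) torch.Size([2, 1])
```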


Popular repositories:

  1. Pseudo-label (Python): done in collaboration with Gowtham Krishna Addluri
  2. Virtual-adversarial-training (Python): done in collaboration with Gowtham Krishna Addluri
  3. Fixmatch_improvement (Python): done in collaboration with Gowtham Krishna Addluri
  4. Neural_network_without_pytorch_library (Jupyter Notebook): done in collaboration with Gowtham Krishna Addluri
  5. Multilinear_regression (Jupyter Notebook): done in collaboration with Gowtham Krishna Addluri
  6. Evasion_attack (Jupyter Notebook): done in collaboration with Gowtham Krishna Addluri