Nkululeko

Nkululeko is a project to detect speaker characteristics by machine learning experiments with a high-level interface. The idea is to have a framework (based on e.g. sklearn and torch) that can be used to rapidly and automatically analyse audio data and explore machine learning models based on that data.

Some of the abilities Nkululeko provides: combining acoustic features and machine learning models (including feature selection and feature concatenation); data exploration, selection, and visualization of results; fine-tuning; ensemble learning; soft labeling (predicting labels with a pre-trained model); and running inference with a trained model on a test set.

Nkululeko orchestrates data loading, feature extraction, and model training, allowing you to specify your experiment in a configuration file. The framework handles the process from raw data to trained model and evaluation, making it easy to run machine learning experiments without directly coding in Python.

Who is this for?

Nkululeko is for speech processing learners, researchers and ML practitioners focused on speaker characteristics, e.g., emotion, age, gender, or disorder detection.

Installation

Nkululeko requires Python 3.9 or higher; builds are tested on Python 3.10, 3.11, 3.12 and 3.13.

Create and activate a virtual Python environment and simply install Nkululeko:

# using python venv
python -m venv .env
source .env/bin/activate  # on Windows: .env\Scripts\activate
pip install nkululeko

# using uv in development mode (inside a clone of the repository)
uv venv --python 3.12
source .venv/bin/activate
uv pip install -r requirements.txt

# or run an example directly with uv after cloning
uv run python -m nkululeko.nkululeko --config examples/exp_polish_tree.ini
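
To check that the installation succeeded, a quick sanity check with plain pip (nothing Nkululeko-specific is assumed here):

pip show nkululeko  # prints the installed version and install location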

Optional Dependencies

Nkululeko supports optional dependencies through extras:

# Install with PyTorch support
pip install nkululeko[torch]

# Install with CPU-only PyTorch
pip install nkululeko[torch-cpu]

# Install with TensorFlow support
pip install nkululeko[tensorflow]

# Install all optional dependencies
pip install nkululeko[all]

Manual Installation Options

You can also install dependencies manually:

PyTorch Installation

For CPU-only installation (recommended for most users):

pip install torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 --index-url https://download.pytorch.org/whl/cpu

For GPU support (CUDA 12.6):

pip install torch torchvision torchaudio

Some functionalities require extra packages that are not installed automatically:

  • For spotlight adapter:
    pip install PyYAML  # Install PyYAML first to avoid dependency issues
    pip install nkululeko[spotlight]

Some example INI files (which you use to control Nkululeko) are in the examples folder.

Documentation

The documentation, with extended coverage of installation, usage, the INI file format, and examples, can be found at nkululeko.readthedocs.io.

Usage

Basically, you specify your experiment in an "ini" file (e.g. experiment.ini) and then call one of the Nkululeko interfaces to run the experiment like this:

python -m nkululeko.nkululeko --config experiment.ini

A basic configuration looks like this:

[EXP]
root = ./
name = exp_emodb
[DATA]
databases = ['emodb']
emodb = ./emodb/
emodb.split_strategy = speaker_split
target = emotion
labels = ['anger', 'boredom', 'disgust', 'fear']
[FEATS]
type = ['praat']
[MODEL]
type = svm
[EXPL]
model = tree
plot_tree = True
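
Assuming this configuration is saved as exp_emodb.ini, it is run with the command shown above; the results (for example the confusion matrix plot) end up in a folder named after the name entry, here exp_emodb:

python -m nkululeko.nkululeko --config exp_emodb.ini
# results are written to ./exp_emodb (see the Hello World walkthrough below)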

Read the Hello World example below for initial usage with the Emo-DB dataset.

Here is an overview of the interfaces/modules; all of them take --config <my_config.ini> as an argument.
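
As an illustrative sketch, a few common invocations are shown below; nkululeko.nkululeko appears elsewhere in this README, while the other module names are taken from the project documentation and should be treated as assumptions if they differ from your installed version:

python -m nkululeko.nkululeko --config my_config.ini  # run a full experiment (train and evaluate)
python -m nkululeko.explore --config my_config.ini    # data exploration and visualization
python -m nkululeko.demo --config my_config.ini       # try out a trained model
python -m nkululeko.predict --config my_config.ini    # predict labels with pre-trained models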

Hello World example

  • NEW: Here's a Google Colab that runs this example out-of-the-box, and here is the same with Kaggle
  • I made a video to show you how to do this on Windows
  • Set up Python on your computer, version >= 3.9
  • Open a terminal/command line/console window
  • Test Python by typing python; it should start with version 3 (NOT 2!). You can leave the Python interpreter by typing exit()
  • Create a folder on your computer for this example, let's call it nkulu_work
  • Get a copy of the Berlin emodb in audformat and unpack it inside the folder you just created (nkulu_work)
  • Make sure the folder is called "emodb" and directly contains the database files (not nested inside another folder)
  • Also, in the nkulu_work folder:
    • Create a Python environment
      • python -m venv venv
    • Then, activate it:
      • under Linux / mac
        • source venv/bin/activate
      • under Windows
        • venv\Scripts\activate.bat
      • if that worked, you should see a (venv) in front of your prompt
    • Install the required packages in your environment
      • pip install nkululeko
      • Repeat until all error messages vanish (or fix them, or try to ignore them)...
  • Now you should have two folders in your nkulu_work folder:
    • emodb and venv
  • Download a copy of the file exp_emodb.ini to the current working directory (nkulu_work)
  • Run the demo
    • python -m nkululeko.nkululeko --config exp_emodb.ini
  • Find the results in the newly created folder exp_emodb
    • Inspect exp_emodb/images/run_0/emodb_xgb_os_0_000_cnf.png
    • This is the main result of your experiment: a confusion matrix for the emodb emotional categories
  • Inspect and play around with the demo configuration file that defined your experiment, then re-run.
  • There are many ways to experiment with different classifiers and acoustic feature sets, all described here; the commands from this walkthrough are condensed in the block below
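
For convenience, here are the commands from the walkthrough above in one block (Linux/macOS shell; the emodb folder and exp_emodb.ini are assumed to already be inside nkulu_work):

cd nkulu_work
python -m venv venv
source venv/bin/activate          # Windows: venv\Scripts\activate.bat
pip install nkululeko
python -m nkululeko.nkululeko --config exp_emodb.ini
# main result: exp_emodb/images/run_0/emodb_xgb_os_0_000_cnf.png (confusion matrix)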

Features

The framework is targeted at the speech domain and supports experiments where different classifiers are combined with different feature extractors.

  • Classifiers: Naive Bayes, KNN, Tree, XGBoost, SVM, MLP
  • Feature extractors: Praat, Opensmile, openXBOW BoAW, TRILL embeddings, Wav2vec2 embeddings, audModel embeddings, ...
  • Feature scaling
  • Label encoding
  • Binning (continuous to categorical)
  • Online demo interface for trained models
  • Visualization: confusion matrix, feature importance, feature distribution, epoch progression, t-SNE plot, data distribution, bias checking, uncertainty estimation

Here's a rough UML-like sketch of the framework (and here's the real one done with pyreverse).

Currently, the following classifiers and regressors are implemented (mostly integrated from sklearn):

  • SVM, SVR, XGB, XGR, Tree, Tree_regressor, KNN, KNN_regressor, NaiveBayes, GMM
  • and the following ANNs (artificial neural networks): MLP (multi-layer perceptron), CNN (convolutional neural network)

For visualization, besides confusion matrices, feature importance, feature distributions, t-SNE plots and data distributions (to name a few), Nkululeko can also be used for bias checking, uncertainty estimation, and epoch progression.

Bias checking

In some cases, you might wonder if there's bias in your data. You can try to detect this with automatically estimated speech properties by visualizing the correlation of target labels and predicted labels.

Uncertainty

Nkululeko estimates the uncertainty of model decisions (only for classifiers) with entropy over the class probabilities or logits per sample.
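
For reference, the entropy in question is the standard Shannon entropy (the textbook definition, not a Nkululeko-specific formula): for a sample with predicted class probabilities $p_1, \dots, p_C$,

$$H = -\sum_{c=1}^{C} p_c \log p_c$$

so a uniform (undecided) prediction yields maximal uncertainty and a confident one-hot prediction yields zero.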

Here's an animation that shows the progress of classification done with nkululeko.

News

There's Felix's blog with tutorials.

License

Nkululeko can be used under the MIT license.

Contributing

Contributions are welcome and encouraged. To learn more about how to contribute to nkululeko, please refer to the Contributing guidelines.

Citation

If you use Nkululeko, please cite the paper:

Felix Burkhardt, Johannes Wagner, Hagen Wierstorf, Florian Eyben and Björn Schuller: Nkululeko: A Tool For Rapid Speaker Characteristics Detection, Proc. LREC, 2022

@inproceedings{Burkhardt:lrec2022,
   title = {Nkululeko: A Tool For Rapid Speaker Characteristics Detection},
   author = {Felix Burkhardt and Johannes Wagner and Hagen Wierstorf and Florian Eyben and Björn Schuller},
   isbn = {9791095546726},
   journal = {2022 Language Resources and Evaluation Conference, LREC 2022},
   keywords = {machine learning,speaker characteristics,tools},
   pages = {1925-1932},
   publisher = {European Language Resources Association (ELRA)},
   year = {2022},
}
