Data processing
Datasets
- class audiotools.data.datasets.AudioDataset(loaders: ~typing.Union[~audiotools.data.datasets.AudioLoader, ~typing.List[~audiotools.data.datasets.AudioLoader], ~typing.Dict[str, ~audiotools.data.datasets.AudioLoader]], sample_rate: int, n_examples: int = 1000, duration: float = 0.5, offset: ~typing.Optional[float] = None, loudness_cutoff: float = -40, num_channels: int = 1, transform: ~typing.Optional[~typing.Callable] = None, aligned: bool = False, shuffle_loaders: bool = False, matcher: ~typing.Callable = <function default_matcher>, without_replacement: bool = True)[source]
Bases:
object
Loads audio from multiple loaders (with associated transforms) for a specified number of samples. Excerpts of the specified duration are drawn at random, above a specified loudness threshold, and are resampled on the fly to the desired sample rate (if it differs from the audio source's sample rate).
This takes either a single AudioLoader object, a list of AudioLoader objects, or a dictionary of AudioLoader objects. Each AudioLoader is called by the dataset, and the result is placed in the output dictionary. A transform can also be specified for the entire dataset, rather than for each specific loader. This transform can be applied to the output of all the loaders if desired.
AudioLoader objects can be specified as aligned, which means the loaders correspond to multitrack audio (e.g. a vocals, bass, drums, and other loader for multitrack music mixtures).
- Parameters
loaders (Union[AudioLoader, List[AudioLoader], Dict[str, AudioLoader]]) – AudioLoaders to sample audio from.
sample_rate (int) – Desired sample rate.
n_examples (int, optional) – Number of examples (length of dataset), by default 1000
duration (float, optional) – Duration of audio samples, by default 0.5
loudness_cutoff (float, optional) – Loudness cutoff threshold for audio samples, by default -40
num_channels (int, optional) – Number of channels in output audio, by default 1
transform (Callable, optional) – Transform to instantiate alongside each dataset item, by default None
aligned (bool, optional) – Whether the loaders should be sampled in an aligned manner (e.g. same offset, duration, and matched file name), by default False
shuffle_loaders (bool, optional) – Whether to shuffle the loaders before sampling from them, by default False
matcher (Callable) – How to match files from adjacent audio lists (e.g. for a multitrack audio loader), by default uses the parent directory of each file.
without_replacement (bool) – Whether to choose files with or without replacement, by default True.
Examples
>>> from audiotools.data.datasets import AudioLoader
>>> from audiotools.data.datasets import AudioDataset
>>> from audiotools import transforms as tfm
>>> import numpy as np
>>>
>>> loaders = [
>>>     AudioLoader(
>>>         sources=[f"tests/audio/spk"],
>>>         transform=tfm.Equalizer(),
>>>         ext=["wav"],
>>>     )
>>>     for i in range(5)
>>> ]
>>>
>>> dataset = AudioDataset(
>>>     loaders=loaders,
>>>     sample_rate=44100,
>>>     duration=1.0,
>>>     transform=tfm.RescaleAudio(),
>>> )
>>>
>>> item = dataset[np.random.randint(len(dataset))]
>>>
>>> for i in range(len(loaders)):
>>>     item[i]["signal"] = loaders[i].transform(
>>>         item[i]["signal"], **item[i]["transform_args"]
>>>     )
>>>     item[i]["signal"].widget(i)
>>>
>>> mix = sum([item[i]["signal"] for i in range(len(loaders))])
>>> mix = dataset.transform(mix, **item["transform_args"])
>>> mix.widget("mix")
Below is an example of how one could load MUSDB multitrack data:
>>> import audiotools as at
>>> from pathlib import Path
>>> from audiotools import transforms as tfm
>>> import numpy as np
>>> import torch
>>>
>>> def build_dataset(
>>>     sample_rate: int = 44100,
>>>     duration: float = 5.0,
>>>     musdb_path: str = "~/.data/musdb/",
>>> ):
>>>     musdb_path = Path(musdb_path).expanduser()
>>>     loaders = {
>>>         src: at.datasets.AudioLoader(
>>>             sources=[musdb_path],
>>>             transform=tfm.Compose(
>>>                 tfm.VolumeNorm(("uniform", -20, -10)),
>>>                 tfm.Silence(prob=0.1),
>>>             ),
>>>             ext=[f"{src}.wav"],
>>>         )
>>>         for src in ["vocals", "bass", "drums", "other"]
>>>     }
>>>
>>>     dataset = at.datasets.AudioDataset(
>>>         loaders=loaders,
>>>         sample_rate=sample_rate,
>>>         duration=duration,
>>>         num_channels=1,
>>>         aligned=True,
>>>         transform=tfm.RescaleAudio(),
>>>         shuffle_loaders=True,
>>>     )
>>>     return dataset, list(loaders.keys())
>>>
>>> train_data, sources = build_dataset()
>>> dataloader = torch.utils.data.DataLoader(
>>>     train_data,
>>>     batch_size=16,
>>>     num_workers=0,
>>>     collate_fn=train_data.collate,
>>> )
>>> batch = next(iter(dataloader))
>>>
>>> for k in sources:
>>>     src = batch[k]
>>>     src["transformed"] = train_data.loaders[k].transform(
>>>         src["signal"].clone(), **src["transform_args"]
>>>     )
>>>
>>> mixture = sum(batch[k]["transformed"] for k in sources)
>>> mixture = train_data.transform(mixture, **batch["transform_args"])
>>>
>>> # Say a model takes the mix and gives back (n_batch, n_src, n_time).
>>> # Construct the targets:
>>> targets = at.AudioSignal.batch([batch[k]["transformed"] for k in sources], dim=1)
Similarly, here’s example code for loading Slakh data:
>>> import audiotools as at
>>> from pathlib import Path
>>> from audiotools import transforms as tfm
>>> import numpy as np
>>> import torch
>>> import glob
>>>
>>> def build_dataset(
>>>     sample_rate: int = 16000,
>>>     duration: float = 10.0,
>>>     slakh_path: str = "~/.data/slakh/",
>>> ):
>>>     slakh_path = Path(slakh_path).expanduser()
>>>
>>>     # Find the max number of sources in Slakh
>>>     src_names = [x.name for x in list(slakh_path.glob("**/*.wav")) if "S" in str(x.name)]
>>>     n_sources = len(list(set(src_names)))
>>>
>>>     loaders = {
>>>         f"S{i:02d}": at.datasets.AudioLoader(
>>>             sources=[slakh_path],
>>>             transform=tfm.Compose(
>>>                 tfm.VolumeNorm(("uniform", -20, -10)),
>>>                 tfm.Silence(prob=0.1),
>>>             ),
>>>             ext=[f"S{i:02d}.wav"],
>>>         )
>>>         for i in range(n_sources)
>>>     }
>>>     dataset = at.datasets.AudioDataset(
>>>         loaders=loaders,
>>>         sample_rate=sample_rate,
>>>         duration=duration,
>>>         num_channels=1,
>>>         aligned=True,
>>>         transform=tfm.RescaleAudio(),
>>>         shuffle_loaders=False,
>>>     )
>>>
>>>     return dataset, list(loaders.keys())
>>>
>>> train_data, sources = build_dataset()
>>> dataloader = torch.utils.data.DataLoader(
>>>     train_data,
>>>     batch_size=16,
>>>     num_workers=0,
>>>     collate_fn=train_data.collate,
>>> )
>>> batch = next(iter(dataloader))
>>>
>>> for k in sources:
>>>     src = batch[k]
>>>     src["transformed"] = train_data.loaders[k].transform(
>>>         src["signal"].clone(), **src["transform_args"]
>>>     )
>>>
>>> mixture = sum(batch[k]["transformed"] for k in sources)
>>> mixture = train_data.transform(mixture, **batch["transform_args"])
- static collate(list_of_dicts: Union[list, dict], n_splits: Optional[int] = None)[source]
Collates items drawn from this dataset. Uses audiotools.core.util.collate().
- Parameters
list_of_dicts (Union[list, dict]) – Data drawn from each item.
n_splits (int) – Number of splits to make when creating the batches (split into sub-batches). Useful for things like gradient accumulation.
- Returns
Dictionary of batched data.
- Return type
dict
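For example, a minimal sketch of splitting one drawn set of items into sub-batches for gradient accumulation (the batch and split sizes are illustrative, and it is assumed here that passing n_splits returns one collated dictionary per split):
>>> # Draw 8 items from the dataset defined in the examples above.
>>> items = [dataset[i] for i in range(8)]
>>> # Collate into 2 sub-batches of 4 items each.
>>> sub_batches = dataset.collate(items, n_splits=2)
>>> for sub_batch in sub_batches:
>>>     pass  # run forward/backward on each sub-batch, then step the optimizer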
- class audiotools.data.datasets.AudioLoader(sources: Optional[List[str]] = None, weights: Optional[List[float]] = None, transform: Optional[Callable] = None, relative_path: str = '', ext: List[str] = ['.wav', '.flac', '.mp3', '.mp4'], shuffle: bool = True, shuffle_state: int = 0)[source]
Bases:
object
Loads audio endlessly from a list of audio sources containing paths to audio files. Audio sources can be folders full of audio files (which are found via file extension) or CSV files that contain paths to audio files.
- Parameters
sources (List[str], optional) – Sources containing folders, or CSVs with paths to audio files, by default None
weights (List[float], optional) – Weights to sample audio files from each source, by default None
relative_path (str, optional) – Path audio should be loaded relative to, by default “”
transform (Callable, optional) – Transform to instantiate alongside audio sample, by default None
ext (List[str]) – List of extensions used to find audio within each source. Can also be a file name (e.g. "vocals.wav"), by default ['.wav', '.flac', '.mp3', '.mp4']
shuffle (bool) – Whether to shuffle the files within the dataloader. Defaults to True.
shuffle_state (int) – State to use to seed the shuffle of the files.
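No usage example is given above, so here is a minimal sketch (the source paths and weights are illustrative assumptions); an AudioLoader is usually constructed with a list of sources and then handed to an AudioDataset, which calls it internally:
>>> from audiotools.data.datasets import AudioLoader, AudioDataset
>>> from audiotools import transforms as tfm
>>>
>>> loader = AudioLoader(
>>>     sources=["tests/audio/spk", "tests/audio/noises.csv"],  # folders or CSVs
>>>     weights=[0.9, 0.1],  # sample from the folder 9x more often than the CSV
>>>     transform=tfm.VolumeNorm(),
>>>     ext=[".wav"],
>>> )
>>> dataset = AudioDataset(loader, sample_rate=44100, duration=1.0)
>>> item = dataset[0]  # draws a random excerpt via the loader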
- class audiotools.data.datasets.ConcatDataset(datasets: list)[source]
Bases:
AudioDataset
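No description is given above; as a hedged sketch, ConcatDataset wraps several AudioDataset objects so they can be drawn from through a single dataset (speech_data and music_data are assumed to be AudioDataset instances built as in the examples above):
>>> from audiotools.data.datasets import ConcatDataset
>>>
>>> combined = ConcatDataset([speech_data, music_data])
>>> item = combined[0]
>>> len(combined)  # total number of examples across both datasets (assumed)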
- class audiotools.data.datasets.ResumableDistributedSampler(dataset, start_idx: Optional[int] = None, **kwargs)[source]
Bases:
DistributedSampler
Distributed sampler that can be resumed from a given start index.
- class audiotools.data.datasets.ResumableSequentialSampler(dataset, start_idx: Optional[int] = None, **kwargs)[source]
Bases:
SequentialSampler
Sequential sampler that can be resumed from a given start index.
- data_source: Sized
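A minimal sketch of resuming iteration mid-epoch with the sequential variant (the start index is illustrative, and train_data is the dataset from the examples above; the distributed variant is used the same way inside a torch.distributed run):
>>> import torch
>>> from audiotools.data.datasets import ResumableSequentialSampler
>>>
>>> # Suppose a previous run stopped after consuming 100 items of this epoch.
>>> sampler = ResumableSequentialSampler(train_data, start_idx=100)
>>> dataloader = torch.utils.data.DataLoader(
>>>     train_data,
>>>     batch_size=16,
>>>     sampler=sampler,
>>>     collate_fn=train_data.collate,
>>> )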
Preprocessing data
- audiotools.data.preprocess.create_csv(audio_files: list, output_csv: Path, loudness: bool = False, data_path: Optional[str] = None)[source]
Converts a folder of audio files to a CSV file. If loudness = True, the output of this function will create a CSV file that looks something like:

path                                     loudness
daps/produced/f1_script1_produced.wav    -16.299999237060547
daps/produced/f1_script2_produced.wav    -16.600000381469727
daps/produced/f1_script3_produced.wav    -17.299999237060547
daps/produced/f1_script4_produced.wav    -16.100000381469727
daps/produced/f1_script5_produced.wav    -16.700000762939453
daps/produced/f3_script1_produced.wav    -16.5
Note
The paths above are written relative to the data_path argument, which defaults to the environment variable PATH_TO_DATA if it isn't passed to this function, and to the empty string if that environment variable is not set.
You can produce a CSV file from a directory of audio files via:
>>> import audiotools
>>> directory = ...
>>> audio_files = audiotools.util.find_audio(directory)
>>> output_csv = "train.csv"
>>> audiotools.data.preprocess.create_csv(
>>>     audio_files, output_csv, loudness=True
>>> )
Note that you can create empty rows in the CSV file by passing an empty string or None in the audio_files list. This is useful if you want to sync multiple CSV files in a multitrack setting. The loudness of these empty rows will be set to -inf.
- Parameters
audio_files (list) – List of audio files.
output_csv (Path) – Output CSV, with each row containing the relative path of every file to data_path, if specified (defaults to None).
loudness (bool) – Compute loudness of entire file and store alongside path.
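For example, a hedged sketch of producing synchronized CSVs for a multitrack setting, padding missing stems with empty rows as described above (the directory layout and stem names are illustrative assumptions):
>>> from pathlib import Path
>>> import audiotools
>>>
>>> track_dirs = sorted(Path("~/.data/multitrack").expanduser().iterdir())
>>> for stem in ["vocals", "drums"]:
>>>     audio_files = []
>>>     for track in track_dirs:
>>>         stem_path = track / f"{stem}.wav"
>>>         # An empty string keeps the row count identical across the stem CSVs.
>>>         audio_files.append(str(stem_path) if stem_path.exists() else "")
>>>     audiotools.data.preprocess.create_csv(
>>>         audio_files, f"{stem}.csv", loudness=True
>>>     )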
Transforms for data augmentation
- class audiotools.data.transforms.BackgroundNoise(snr: tuple = ('uniform', 10.0, 30.0), sources: Optional[List[str]] = None, weights: Optional[List[float]] = None, eq_amount: tuple = ('const', 1.0), n_bands: int = 3, name: Optional[str] = None, prob: float = 1.0, loudness_cutoff: Optional[float] = None)[source]
Bases:
BaseTransform
Adds background noise from audio specified by a set of CSV files. A valid CSV file is typically generated by audiotools.data.preprocess.create_csv() and looks like:

path
room_tone/m6_script2_clean.wav
room_tone/m6_script2_cleanraw.wav
room_tone/m6_script2_ipad_balcony1.wav
room_tone/m6_script2_ipad_bedroom1.wav
room_tone/m6_script2_ipad_confroom1.wav
room_tone/m6_script2_ipad_confroom2.wav
room_tone/m6_script2_ipad_livingroom1.wav
room_tone/m6_script2_ipad_office1.wav
Note
All paths are relative to an environment variable called PATH_TO_DATA, so that CSV files are portable across machines where data may be located in different places.
This transform calls audiotools.core.effects.EffectMixin.mix() and audiotools.core.effects.EffectMixin.equalizer() under the hood.
- Parameters
snr (tuple, optional) – Signal-to-noise ratio, by default (“uniform”, 10.0, 30.0)
sources (List[str], optional) – Sources containing folders, or CSVs with paths to audio files, by default None
weights (List[float], optional) – Weights to sample audio files from each source, by default None
eq_amount (tuple, optional) – Amount of equalization to apply, by default (“const”, 1.0)
n_bands (int, optional) – Number of bands in equalizer, by default 3
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
loudness_cutoff (float, optional) – Loudness cutoff when loading from audio files, by default None
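A minimal sketch of applying this transform on its own, using the noise CSV from the library's test assets referenced elsewhere in these docs (the seed and SNR range are illustrative):
>>> from audiotools import AudioSignal
>>> from audiotools import transforms as tfm
>>>
>>> signal = AudioSignal("tests/audio/spk/f10_script4_produced.wav", offset=10, duration=2)
>>> transform = tfm.BackgroundNoise(
>>>     snr=("uniform", 10.0, 30.0),
>>>     sources=["tests/audio/noises.csv"],
>>> )
>>> kwargs = transform.instantiate(0, signal)
>>> noisy = transform(signal.clone(), **kwargs)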
- class audiotools.data.transforms.BaseTransform(keys: list = [], name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
object
This is the base class for all transforms that are implemented in this library. Transforms have two main operations: transform and instantiate.
instantiate sets the parameters randomly from distribution tuples for each parameter. For example, for the BackgroundNoise transform, the signal-to-noise ratio (snr) is chosen randomly by instantiate. By default, it is chosen uniformly between 10.0 and 30.0 (the tuple is set to ("uniform", 10.0, 30.0)).
transform applies the transform using the instantiated parameters. A simple example is as follows:
>>> seed = 0
>>> signal = ...
>>> transform = transforms.NoiseFloor(db=("uniform", -50.0, -30.0))
>>> kwargs = transform.instantiate()
>>> output = transform(signal.clone(), **kwargs)
By breaking apart the instantiation of parameters from the actual audio processing of the transform, we can make things more reproducible, while also applying the transform on batches of data efficiently on GPU, rather than on individual audio samples.
Note
We call signal.clone() for the input to the transform function because signals are modified in-place! If you don't clone the signal, you will lose the original data.
- Parameters
keys (list, optional) – Keys that the transform looks for when calling self.transform, by default []. In general this is set automatically, and you won't need to manipulate this argument.
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
Examples
>>> seed = 0
>>>
>>> audio_path = "tests/audio/spk/f10_script4_produced.wav"
>>> signal = AudioSignal(audio_path, offset=10, duration=2)
>>> transform = tfm.Compose(
>>>     [
>>>         tfm.RoomImpulseResponse(sources=["tests/audio/irs.csv"]),
>>>         tfm.BackgroundNoise(sources=["tests/audio/noises.csv"]),
>>>     ],
>>> )
>>>
>>> kwargs = transform.instantiate(seed, signal)
>>> output = transform(signal, **kwargs)
- static apply_mask(batch: dict, mask: Tensor)[source]
Applies a mask to the batch.
- Parameters
batch (dict) – Batch whose values will be masked in the transform pass.
mask (torch.Tensor) – Mask to apply to batch.
- Returns
A dictionary that contains values only where mask = True.
- Return type
dict
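A small sketch of what this looks like with a flat batch of tensors (the keys and values are illustrative); per the description above, only the entries where the mask is True survive:
>>> import torch
>>> from audiotools.data.transforms import BaseTransform
>>>
>>> batch = {"snr": torch.tensor([10.0, 20.0, 30.0, 40.0])}
>>> mask = torch.tensor([True, False, True, True])
>>> masked = BaseTransform.apply_mask(batch, mask)
>>> masked["snr"]  # values kept where the mask is True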
- batch_instantiate(states: Optional[list] = None, signal: Optional[AudioSignal] = None)[source]
Instantiates arguments for every item in a batch, given a list of states. Each state in the list corresponds to one item in the batch.
- Parameters
states (list, optional) – List of states, by default None
signal (AudioSignal, optional) – AudioSignal to pass to the self.instantiate section if it is needed for this transform, by default None
- Returns
Collated dictionary of arguments.
- Return type
dict
Examples
>>> batch_size = 4
>>> signal = AudioSignal(audio_path, offset=10, duration=2)
>>> signal_batch = AudioSignal.batch([signal.clone() for _ in range(batch_size)])
>>>
>>> states = [seed + idx for idx in list(range(batch_size))]
>>> kwargs = transform.batch_instantiate(states, signal_batch)
>>> batch_output = transform(signal_batch, **kwargs)
- instantiate(state: Optional[RandomState] = None, signal: Optional[AudioSignal] = None)[source]
Instantiates parameters for the transform.
- Parameters
state (RandomState, optional) – Random state (or seed) used when sampling the transform's parameters, by default None
signal (AudioSignal, optional) – AudioSignal to pass to the instantiation, if it is needed for this transform, by default None
- Returns
Dictionary containing instantiated arguments for every keyword argument to self._transform.
dict
Examples
>>> for seed in range(10):
>>>     kwargs = transform.instantiate(seed, signal)
>>>     output = transform(signal.clone(), **kwargs)
- transform(signal: AudioSignal, **kwargs)[source]
Apply the transform to the audio signal, with given keyword arguments.
- Parameters
signal (AudioSignal) – Signal that will be modified by the transforms in-place.
kwargs (dict) – Keyword arguments to the specific transform's self._transform function.
- Returns
Transformed AudioSignal.
- Return type
AudioSignal
Examples
>>> for seed in range(10):
>>>     kwargs = transform.instantiate(seed, signal)
>>>     output = transform(signal.clone(), **kwargs)
- class audiotools.data.transforms.Choose(*transforms: list, weights: Optional[list] = None, name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
Compose
Choose logic is the same as audiotools.data.transforms.Compose(), but instead of applying all the transforms in sequence, it applies just a single transform, which is chosen for each item in the batch.
- Parameters
*transforms (list) – List of transforms to apply
weights (list) – Probability of choosing any specific transform.
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
Examples
>>> transforms.Choose(tfm.LowPass(), tfm.HighPass())
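A slightly fuller sketch with weights and the usual instantiate/apply pattern (the weights and seed are illustrative, and signal is an AudioSignal as in the BaseTransform examples above):
>>> transform = tfm.Choose(
>>>     tfm.LowPass(),
>>>     tfm.HighPass(),
>>>     weights=[0.8, 0.2],  # low-pass is chosen 80% of the time
>>> )
>>> kwargs = transform.instantiate(0, signal)
>>> output = transform(signal.clone(), **kwargs)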
- class audiotools.data.transforms.ClippingDistortion(perc: tuple = ('uniform', 0.0, 0.1), name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
BaseTransform
Adds clipping distortion to signal. Corresponds to audiotools.core.effects.EffectMixin.clip_distortion().
- Parameters
perc (tuple, optional) – Clipping percentile. Values are between 0.0 and 1.0. Typical values are 0.1 or below, by default ("uniform", 0.0, 0.1)
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.Compose(*transforms: list, name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
BaseTransform
Compose applies transforms in sequence, one after the other. The transforms are passed in as positional arguments or as a list like so:
>>> transform = tfm.Compose(
>>>     [
>>>         tfm.RoomImpulseResponse(sources=["tests/audio/irs.csv"]),
>>>         tfm.BackgroundNoise(sources=["tests/audio/noises.csv"]),
>>>     ],
>>> )
This will convolve the signal with a room impulse response, and then add background noise to the signal. Calling instantiate instantiates the parameters for every transform in the transform list, so the interface for using the Compose transform is the same as for any other transform:
>>> kwargs = transform.instantiate()
>>> output = transform(signal.clone(), **kwargs)
Under the hood, Compose maps each transform to a unique name of the form {position}.{name}, where position is the index of the transform in the list. Compose can nest within other Compose transforms, like so:
>>> preprocess = transforms.Compose(
>>>     tfm.GlobalVolumeNorm(),
>>>     tfm.CrossTalk(),
>>>     name="preprocess",
>>> )
>>> augment = transforms.Compose(
>>>     tfm.RoomImpulseResponse(),
>>>     tfm.BackgroundNoise(),
>>>     name="augment",
>>> )
>>> postprocess = transforms.Compose(
>>>     tfm.VolumeChange(),
>>>     tfm.RescaleAudio(),
>>>     tfm.ShiftPhase(),
>>>     name="postprocess",
>>> )
>>> transform = transforms.Compose(preprocess, augment, postprocess)
This defines 3 composed transforms, and then composes them in sequence with one another.
- Parameters
*transforms (list) – List of transforms to apply
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- filter(*names: list)[source]
This can be used to skip transforms entirely when applying the sequence of transforms to a signal. For example, take the following transforms with the names preprocess, augment, postprocess.
>>> preprocess = transforms.Compose(
>>>     tfm.GlobalVolumeNorm(),
>>>     tfm.CrossTalk(),
>>>     name="preprocess",
>>> )
>>> augment = transforms.Compose(
>>>     tfm.RoomImpulseResponse(),
>>>     tfm.BackgroundNoise(),
>>>     name="augment",
>>> )
>>> postprocess = transforms.Compose(
>>>     tfm.VolumeChange(),
>>>     tfm.RescaleAudio(),
>>>     tfm.ShiftPhase(),
>>>     name="postprocess",
>>> )
>>> transform = transforms.Compose(preprocess, augment, postprocess)
If we wanted to apply all 3 to a signal, we do:
>>> kwargs = transform.instantiate()
>>> output = transform(signal.clone(), **kwargs)
But if we only wanted to apply the preprocess and postprocess transforms to the signal, we do:
>>> with transform.filter("preprocess", "postprocess"):
>>>     output = transform(signal.clone(), **kwargs)
- Parameters
*names (list) – List of transforms, identified by name, to apply to signal.
- class audiotools.data.transforms.CorruptPhase(scale: tuple = ('uniform', 0, 3.141592653589793), name: Optional[str] = None, prob: float = 1)[source]
Bases:
SpectralTransform
Corrupts the phase of the audio.
Uses audiotools.core.dsp.DSPMixin.corrupt_phase().
- Parameters
scale (tuple, optional) – How much to corrupt phase by, by default (“uniform”, 0, np.pi)
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.CrossTalk(snr: tuple = ('uniform', 0.0, 10.0), sources: Optional[List[str]] = None, weights: Optional[List[float]] = None, name: Optional[str] = None, prob: float = 1.0, loudness_cutoff: float = -40)[source]
Bases:
BaseTransform
Adds crosstalk between speakers, whose audio is drawn from a CSV file that was produced via audiotools.data.preprocess.create_csv().
This transform calls audiotools.core.effects.EffectMixin.mix() under the hood.
- Parameters
snr (tuple, optional) – How loud cross-talk speaker is relative to original signal in dB, by default (“uniform”, 0.0, 10.0)
sources (List[str], optional) – Sources containing folders, or CSVs with paths to audio files, by default None
weights (List[float], optional) – Weights to sample audio files from each source, by default None
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
loudness_cutoff (float, optional) – Loudness cutoff when loading from audio files, by default -40
- class audiotools.data.transforms.Equalizer(eq_amount: tuple = ('const', 1.0), n_bands: int = 6, name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
BaseTransform
Applies an equalization curve to the audio signal. Corresponds to audiotools.core.effects.EffectMixin.equalizer().
- Parameters
eq_amount (tuple, optional) – The maximum dB cut to apply to the audio in any band, by default ("const", 1.0)
n_bands (int, optional) – Number of bands in EQ, by default 6
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.FrequencyMask(f_center: tuple = ('uniform', 0.0, 1.0), f_width: tuple = ('const', 0.1), name: Optional[str] = None, prob: float = 1)[source]
Bases:
SpectralTransform
Masks a band of frequencies at a center frequency from the audio.
Uses audiotools.core.dsp.DSPMixin.mask_frequencies().
- Parameters
f_center (tuple, optional) – Center frequency between 0.0 and 1.0 (Nyquist), by default (“uniform”, 0.0, 1.0)
f_width (tuple, optional) – Width of zero’d out band, by default (“const”, 0.1)
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.FrequencyNoise(f_center: tuple = ('uniform', 0.0, 1.0), f_width: tuple = ('const', 0.1), name: Optional[str] = None, prob: float = 1)[source]
Bases:
FrequencyMask
Similar to audiotools.data.transforms.FrequencyMask(), but replaces with noise instead of zeros.
- Parameters
f_center (tuple, optional) – Center frequency between 0.0 and 1.0 (Nyquist), by default (“uniform”, 0.0, 1.0)
f_width (tuple, optional) – Width of zero’d out band, by default (“const”, 0.1)
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.GlobalVolumeNorm(db: tuple = ('const', -24), name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
BaseTransform
Similar to audiotools.data.transforms.VolumeNorm(), this transform also normalizes the volume of a signal, but it uses the volume of the entire audio file the loaded excerpt comes from, rather than the volume of just the excerpt. The volume of the entire audio file is expected in signal.metadata["loudness"]. If loading audio from a CSV generated by audiotools.data.preprocess.create_csv() with loudness = True, like the following:

path                                     loudness
daps/produced/f1_script1_produced.wav    -16.299999237060547
daps/produced/f1_script2_produced.wav    -16.600000381469727
daps/produced/f1_script3_produced.wav    -17.299999237060547
daps/produced/f1_script4_produced.wav    -16.100000381469727
daps/produced/f1_script5_produced.wav    -16.700000762939453
daps/produced/f3_script1_produced.wav    -16.5
then the AudioLoader will automatically load the loudness column into the metadata of the signal.
Uses audiotools.core.effects.EffectMixin.volume_change().
- Parameters
db (tuple, optional) – dB to normalize signal to, by default (“const”, -24)
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
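A minimal sketch of the intended workflow: the loudness of the full file is stored in the signal's metadata (normally populated by AudioLoader from the CSV above; here it is set by hand with an illustrative value), and the transform then normalizes relative to it:
>>> from audiotools import AudioSignal
>>> from audiotools import transforms as tfm
>>>
>>> signal = AudioSignal("tests/audio/spk/f10_script4_produced.wav", offset=10, duration=2)
>>> signal.metadata["loudness"] = -16.3  # normally filled in by AudioLoader
>>> transform = tfm.GlobalVolumeNorm(db=("const", -24))
>>> kwargs = transform.instantiate(0, signal)
>>> output = transform(signal.clone(), **kwargs)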
- class audiotools.data.transforms.HighPass(cutoff: tuple = ('choice', [50, 100, 250, 500, 1000]), zeros: int = 51, name: Optional[str] = None, prob: float = 1)[source]
Bases:
BaseTransform
Applies a HighPass filter.
Uses audiotools.core.dsp.DSPMixin.high_pass().
- Parameters
cutoff (tuple, optional) – Cutoff frequency distribution, by default ("choice", [50, 100, 250, 500, 1000])
zeros (int, optional) – Number of zero-crossings in filter, argument to julius.LowPassFilters, by default 51
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.Identity(keys: list = [], name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
BaseTransform
This transform just returns the original signal.
- class audiotools.data.transforms.InvertPhase(name: Optional[str] = None, prob: float = 1)[source]
Bases:
ShiftPhase
Inverts the phase of the audio.
Uses audiotools.core.dsp.DSPMixin.shift_phase().
- Parameters
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.LowPass(cutoff: tuple = ('choice', [4000, 8000, 16000]), zeros: int = 51, name: Optional[str] = None, prob: float = 1)[source]
Bases:
BaseTransform
Applies a LowPass filter.
Uses audiotools.core.dsp.DSPMixin.low_pass().
- Parameters
cutoff (tuple, optional) – Cutoff frequency distribution, by default ("choice", [4000, 8000, 16000])
zeros (int, optional) – Number of zero-crossings in filter, argument to julius.LowPassFilters, by default 51
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.MaskLowMagnitudes(db_cutoff: tuple = ('uniform', -10, 10), name: Optional[str] = None, prob: float = 1)[source]
Bases:
SpectralTransform
Masks low magnitude regions out of signal.
Uses audiotools.core.dsp.DSPMixin.mask_low_magnitudes().
- Parameters
db_cutoff (tuple, optional) – Decibel value below which magnitudes will be masked away, by default ("uniform", -10, 10)
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.MuLawQuantization(channels: tuple = ('choice', [8, 32, 128, 256, 1024]), name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
BaseTransform
Applies mu-law quantization to the input waveform. Corresponds to audiotools.core.effects.EffectMixin.mulaw_quantization().
- Parameters
channels (tuple, optional) – Number of mu-law spaced quantization channels to quantize to, by default (“choice”, [8, 32, 128, 256, 1024])
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.NoiseFloor(db: tuple = ('const', -50.0), name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
BaseTransform
Adds a noise floor of Gaussian noise to the signal at a specified dB.
- Parameters
db (tuple, optional) – Level of noise to add to signal, by default (“const”, -50.0)
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.Quantization(channels: tuple = ('choice', [8, 32, 128, 256, 1024]), name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
BaseTransform
Applies quantization to the input waveform. Corresponds to audiotools.core.effects.EffectMixin.quantization().
- Parameters
channels (tuple, optional) – Number of evenly spaced quantization channels to quantize to, by default (“choice”, [8, 32, 128, 256, 1024])
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.Repeat(transform, n_repeat: int = 1, name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
Compose
Repeatedly applies a given transform n_repeat times.
- Parameters
transform (BaseTransform) – Transform to repeat.
n_repeat (int, optional) – Number of times to repeat transform, by default 1
- class audiotools.data.transforms.RepeatUpTo(transform, max_repeat: int = 5, weights: Optional[list] = None, name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
Choose
Repeatedly applies a given transform up to max_repeat times.
- Parameters
transform (BaseTransform) – Transform to repeat.
max_repeat (int, optional) – Max number of times to repeat transform, by default 5
weights (list) – Probability of choosing any specific number of repeats up to max_repeat.
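As a quick sketch, either wrapper takes a single transform to repeat (the wrapped transform and counts below are illustrative; signal is an AudioSignal as in the earlier examples):
>>> # Apply clipping distortion exactly twice per item ...
>>> transform = tfm.Repeat(tfm.ClippingDistortion(), n_repeat=2)
>>> # ... or a random number of times up to max_repeat, chosen per item.
>>> transform = tfm.RepeatUpTo(tfm.ClippingDistortion(), max_repeat=3)
>>> kwargs = transform.instantiate(0, signal)
>>> output = transform(signal.clone(), **kwargs)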
- class audiotools.data.transforms.RescaleAudio(val: float = 1.0, name: Optional[str] = None, prob: float = 1)[source]
Bases:
BaseTransform
Rescales the audio so it is between -val and val only if the original audio exceeds those bounds. Useful if transforms have caused the audio to clip.
Uses audiotools.core.effects.EffectMixin.ensure_max_of_audio().
- Parameters
val (float, optional) – Max absolute value of signal, by default 1.0
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.RoomImpulseResponse(drr: tuple = ('uniform', 0.0, 30.0), sources: Optional[List[str]] = None, weights: Optional[List[float]] = None, eq_amount: tuple = ('const', 1.0), n_bands: int = 6, name: Optional[str] = None, prob: float = 1.0, use_original_phase: bool = False, offset: float = 0.0, duration: float = 1.0)[source]
Bases:
BaseTransform
Convolves signal with a room impulse response, at a specified direct-to-reverberant ratio, with equalization applied. Room impulse response data is drawn from a CSV file that was produced via audiotools.data.preprocess.create_csv().
This transform calls audiotools.core.effects.EffectMixin.apply_ir() under the hood.
- Parameters
drr (tuple, optional) – Direct-to-reverberant ratio in dB, by default ("uniform", 0.0, 30.0)
sources (List[str], optional) – Sources containing folders, or CSVs with paths to audio files, by default None
weights (List[float], optional) – Weights to sample audio files from each source, by default None
eq_amount (tuple, optional) – Amount of equalization to apply, by default (“const”, 1.0)
n_bands (int, optional) – Number of bands in equalizer, by default 6
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
use_original_phase (bool, optional) – Whether or not to use the original phase, by default False
offset (float, optional) – Offset from each impulse response file to use, by default 0.0
duration (float, optional) – Duration of each impulse response, by default 1.0
- class audiotools.data.transforms.ShiftPhase(shift: tuple = ('uniform', -3.141592653589793, 3.141592653589793), name: Optional[str] = None, prob: float = 1)[source]
Bases:
SpectralTransform
Shifts the phase of the audio.
Uses audiotools.core.dsp.DSPMixin.shift_phase().
- Parameters
shift (tuple, optional) – How much to shift phase by, by default (“uniform”, -np.pi, np.pi)
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.Silence(name: Optional[str] = None, prob: float = 0.1)[source]
Bases:
BaseTransform
Zeros out the signal with some probability.
- Parameters
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 0.1
- class audiotools.data.transforms.Smoothing(window_type: tuple = ('const', 'average'), window_length: tuple = ('choice', [8, 16, 32, 64, 128, 256, 512]), name: Optional[str] = None, prob: float = 1)[source]
Bases:
BaseTransform
Convolves the signal with a smoothing window.
Uses audiotools.core.effects.EffectMixin.convolve().
- Parameters
window_type (tuple, optional) – Type of window to use, by default (“const”, “average”)
window_length (tuple, optional) – Length of smoothing window, by default (“choice”, [8, 16, 32, 64, 128, 256, 512])
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.SpectralDenoising(eq_amount: tuple = ('const', 1.0), denoise_amount: tuple = ('uniform', 0.8, 1.0), nz_volume: float = -40, n_bands: int = 6, n_freq: int = 3, n_time: int = 5, name: Optional[str] = None, prob: float = 1)[source]
Bases:
Equalizer
Applies the denoising algorithm detailed in audiotools.ml.layers.spectral_gate.SpectralGate(), using a randomly generated noise signal for denoising.
- Parameters
eq_amount (tuple, optional) – Amount of eq to apply to noise signal, by default (“const”, 1.0)
denoise_amount (tuple, optional) – Amount to denoise by, by default (“uniform”, 0.8, 1.0)
nz_volume (float, optional) – Volume of noise to denoise with, by default -40
n_bands (int, optional) – Number of bands in equalizer, by default 6
n_freq (int, optional) – Number of frequency bins to smooth by, by default 3
n_time (int, optional) – Number of time bins to smooth by, by default 5
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.SpectralTransform(keys: list = [], name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
BaseTransform
Spectral transforms require STFT data to exist, since manipulations of the STFT require the spectrogram. This just calls stft before the transform is called, and calls istft after the transform is called, so that the audio data is updated after the spectral manipulation.
- transform(signal, **kwargs)[source]
Apply the transform to the audio signal, with given keyword arguments.
- Parameters
signal (AudioSignal) – Signal that will be modified by the transforms in-place.
kwargs (dict) – Keyword arguments to the specific transform's self._transform function.
- Returns
Transformed AudioSignal.
- Return type
AudioSignal
Examples
>>> for seed in range(10):
>>>     kwargs = transform.instantiate(seed, signal)
>>>     output = transform(signal.clone(), **kwargs)
- class audiotools.data.transforms.TimeMask(t_center: tuple = ('uniform', 0.0, 1.0), t_width: tuple = ('const', 0.025), name: Optional[str] = None, prob: float = 1)[source]
Bases:
SpectralTransform
Masks out contiguous time-steps from signal.
Uses audiotools.core.dsp.DSPMixin.mask_timesteps().
- Parameters
t_center (tuple, optional) – Center time in terms of 0.0 and 1.0 (duration of signal), by default (“uniform”, 0.0, 1.0)
t_width (tuple, optional) – Width of dropped out portion, by default (“const”, 0.025)
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.TimeNoise(t_center: tuple = ('uniform', 0.0, 1.0), t_width: tuple = ('const', 0.025), name: Optional[str] = None, prob: float = 1)[source]
Bases:
TimeMask
Similar to audiotools.data.transforms.TimeMask(), but replaces with noise instead of zeros.
- Parameters
t_center (tuple, optional) – Center time in terms of 0.0 and 1.0 (duration of signal), by default (“uniform”, 0.0, 1.0)
t_width (tuple, optional) – Width of dropped out portion, by default (“const”, 0.025)
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.VolumeChange(db: tuple = ('uniform', -12.0, 0.0), name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
BaseTransform
Changes the volume of the input signal.
Uses audiotools.core.effects.EffectMixin.volume_change().
- Parameters
db (tuple, optional) – Change in volume in decibels, by default (“uniform”, -12.0, 0.0)
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- class audiotools.data.transforms.VolumeNorm(db: tuple = ('const', -24), name: Optional[str] = None, prob: float = 1.0)[source]
Bases:
BaseTransform
Normalizes the volume of the excerpt to a specified decibel.
Uses audiotools.core.effects.EffectMixin.normalize().
- Parameters
db (tuple, optional) – dB to normalize signal to, by default (“const”, -24)
name (str, optional) – Name of this transform, used to identify it in the dictionary produced by self.instantiate, by default None
prob (float, optional) – Probability of applying this transform, by default 1.0
- audiotools.data.transforms.tt()
Shorthand for converting things to torch.tensor.