mydatapreprocessing

Load, consolidate, and preprocess data in the simplest possible way.


Load data from a web link or a local file (JSON, CSV, Excel, parquet, h5…), consolidate it (resample the data, clean NaN values, do string embedding), derive new features via column derivation, and do preprocessing like standardization or smoothing. If you want to see how the functions work, check their docstrings; working examples with printed results are also in the tests (visual.py).

Installation

Python >= 3.6 (Python 2 is not supported).

Install simply with:

pip install mydatapreprocessing

Some libraries are not needed by every user (they are used only for specific data inputs, for example). If you want to be sure to have all the libraries, you can install with extras requirements like:

pip install mydatapreprocessing[datatypes]

Available extras are ["all", "datasets", "datatypes"].
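
For example, to install all optional dependencies at once:

pip install mydatapreprocessing[all]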

Examples:

>>> import mydatapreprocessing as mdp

Load data

You can use

  • python formats (numpy.ndarray, pd.DataFrame, list, tuple, dict)
  • local files
  • web urls

Supported path formats are:

  • csv
  • xlsx and xls
  • json
  • parquet
  • h5

You can load multiple datasets at once by passing them in a list.

The syntax is always the same.

>>> data = mdp.load_data.load_data(
...     "https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv",
... )
>>> # data2 = mdp.load_data.load_data([PATH_TO_FILE.csv, PATH_TO_FILE2.csv])
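
Python objects from the list above can be passed directly as well, for example a dict (a minimal sketch based on the supported formats):

>>> data3 = mdp.load_data.load_data({"col_1": [3, 2, 1], "col_2": [6, 5, 4]})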

Consolidation

If you want to use the data in machine learning models, you will probably want to remove NaN values, convert string columns to numeric where possible, do encoding or keep only numeric data, and resample.

Consolidation works with pandas DataFrames, as column names matter here.

There are many functions, but the main one is consolidate_data, which pipelines the others.

>>> data = mdp.load_data.load_data(r"https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv")
...
>>> consolidation_config = mdp.consolidation.consolidation_config.default_consolidation_config.do.copy()
>>> consolidation_config.datetime.datetime_column = 'Date'
>>> consolidation_config.resample.resample = 'M'
>>> consolidation_config.resample.resample_function = "mean"
>>> consolidation_config.dtype = 'float32'
...
>>> consolidated = mdp.consolidation.consolidate_data(data, consolidation_config)
>>> consolidated.head()
                 Temp
Date
1981-01-31  17.712904
1981-02-28  17.678572
1981-03-31  13.500000
1981-04-30  12.356667
1981-05-31   9.490322

In the config, you can use the shorter dict update syntax, as all value names are unique.
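
For example, mirroring the update call used in the preprocessing example below (the flat keys here are assumed to resolve to the nested values configured above):

>>> consolidation_config.do.update({"datetime_column": "Date", "resample": "M"})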

Feature engineering

Create new columns that can be used, for example, as input for another machine learning model.

>>> import mydatapreprocessing as mdp
>>> import mydatapreprocessing.feature_engineering as mdpf
>>> import pandas as pd
...
>>> data = pd.DataFrame(
...     [mdp.datasets.sin(n=30), mdp.datasets.ramp(n=30)]
... ).T
...
>>> extended = mdpf.add_derived_columns(data, differences=True, rolling_means=10)
>>> extended.columns
Index([                      0,                       1,
              '0 - Difference',        '1 - Difference',
       '0 - Second difference', '1 - Second difference',
        'Multiplicated (0, 1)',      '0 - Rolling mean',
            '1 - Rolling mean',       '0 - Rolling std',
             '1 - Rolling std',     '0 - Mean distance',
           '1 - Mean distance'],
      dtype='object')
>>> len(extended)
21

Functions in feature_engineering and preprocessing expect data in the shape (n_samples, n_features). n_samples is usually much bigger than n_features, so data is transposed in consolidate_data if necessary.
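
For illustration, an array with 100 samples of 3 features has this shape:

>>> import numpy as np
>>> np.zeros((100, 3)).shape  # 100 rows (samples), 3 columns (features)
(100, 3)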

Preprocessing

Preprocessing can be used on a pandas DataFrame as well as on a numpy array. Column names are not important, as it's just a matrix with a defined dtype.

There are many functions, but the main one is preprocess_data, which pipelines the others. Preprocessed data can be converted back with preprocess_data_inverse.

>>> import numpy as np
>>> import pandas as pd
...
>>> from mydatapreprocessing import preprocessing as mdpp
...
>>> df = pd.DataFrame(np.array([range(5), range(20, 25), np.random.randn(5)]).astype("float32").T)
>>> df.iloc[2, 0] = 500
...
>>> config = mdpp.preprocessing_config.default_preprocessing_config.do.copy()
>>> config.do.update({"remove_outliers": None, "difference_transform": True, "standardize": "standardize"})
...
>>> data_preprocessed, inverse_config = mdpp.preprocess_data(df.values, config)
>>> data_preprocessed
array([[ 0.       ,  0.       ,  0.2571587],
       [ 1.4142135,  0.       , -0.633448 ],
       [-1.4142135,  0.       ,  1.5037845],
       [ 0.       ,  0.       , -1.1274952]], dtype=float32)

When used for prediction, the last value is used for the inverse transform by default. Here, for testing, the first value is used to check whether the original data is restored.

>>> inverse_config.difference_transform = df.iloc[0, 0]
>>> data_preprocessed_inverse = mdpp.preprocess_data_inverse(
...     data_preprocessed[:, 0], inverse_config
... )
>>> data_preprocessed_inverse
array([  1., 500.,   3.,   4.], dtype=float32)
>>> np.allclose(df.values[1:, 0], data_preprocessed_inverse, atol=1.0e-5)
True

Submodules

consolidation

Consolidate data. Consolidation means that the output is standardized, so you know it will work in your algorithms even when the data is not known beforehand. It includes, for example, shape verification, string embedding, setting a datetime index, resampling, and NaN cleaning.

You can consolidate data with consolidate_data and prepare it, for example, for machine learning models.

There are many small functions in consolidation_functions that you can use separately, but the main pipeline function consolidate_data calls them for you based on the config.

Functions usually work on DataFrames, as consolidation is the first phase of data preparation and column names still matter here.

There is an ‘inplace’ parameter in many places. It means that the function changes your original data, but the syntax is a bit different as the result is returned anyway, so use for example df = consolidation_function(df, inplace=True).

database

Read from or write to a database.

It works only for MS SQL Server so far.

datasets

Test data definition.

The data can be used, for example, to validate machine learning time series prediction results.

The only ‘real’ data is an ECG heart signal, returned by the function get_ecg().
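
For example (sin is the same generator used in the feature engineering example above):

>>> import mydatapreprocessing as mdp
>>> ecg = mdp.datasets.get_ecg()  # the only 'real' dataset
>>> artificial = mdp.datasets.sin(n=100)  # generated sine data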

feature_engineering

Extract new features from available data.

You can add new derived columns. This newly generated data can help machine learning models achieve better results.

In add_derived_columns you can add first and second differences, multiplication of columns, rolling means, and rolling standard deviations.

In add_frequency_columns you can add the maxima of fast Fourier transform results computed on a running window.
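
A minimal sketch (the window parameter name is an assumption, not a verified signature; check the function's docstring):

>>> import pandas as pd
>>> import mydatapreprocessing as mdp
>>> import mydatapreprocessing.feature_engineering as mdpf
>>> data = pd.DataFrame([mdp.datasets.sin(n=100)]).T
>>> # 'window' is assumed here to set the running window length
>>> frequencies = mdpf.add_frequency_columns(data, window=32)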

helpers

Helper functions that are used across the whole library.

They are made mostly for internal use, but they were added to the public API as they may be helpful.

load_data

This module helps you load data from a path as well as from a web URL in various formats.

Supported path formats are:

  • csv
  • xlsx and xls
  • json
  • parquet
  • h5

You can pass multiple files (or URLs) at once and the data will be automatically concatenated.

The main function is load_data; you can find working examples in its docstring.

There is also a function get_file_paths, which opens a dialog window in your operating system and lets you choose files in a convenient way. You can then pass its tuple output to load_data.
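
For example (commented out here, as it opens an interactive dialog):

>>> # paths = mdp.load_data.get_file_paths()
>>> # data = mdp.load_data.load_data(paths)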

misc

Miscellaneous functions that do not fit into other modules.

Here you can find, for example, functions for train/test splitting, for rolling windows, for cleaning a DataFrame before printing it as a table, or for adding gaps to time series data where values are missing, so that two remote points are not joined in a plot.

preprocessing

Subpackage for data preprocessing.

Preprocessing means, for example, standardization, data smoothing, outlier removal, or binning.

There are many small functions that you can use separately, but the main function preprocess_data calls all of them for you based on the input parameters. For inverse preprocessing use preprocess_data_inverse.

Functions are available for pd.DataFrame as well as numpy arrays. The output is usually of the same type as the input. Functions can be used inplace, or a copy can be created.

Note

In many functions, a main column is necessary for correct functioning. It is supposed to be the first column, at index 0. If you use consolidation, use the first_column parameter, or use move_on_first_column manually.
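
A sketch of the manual option (the module path of move_on_first_column is an assumption; check its docstring):

>>> # Assumed location and signature, shown commented out:
>>> # df = mdp.consolidation.consolidation_functions.move_on_first_column(df, "Temp")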

types

A place for storing the types used across the library, for example the type of the data input format.

mydatapreprocessing.types.DataFrameOrArrayGeneric

Many functions work for numpy arrays as well as for pandas DataFrames. Usually, the same type as the input is returned.

Type: typing.TypeVar

mydatapreprocessing.types.Numeric

Basic numeric type usually used in computation. A union of float, int, and numpy.number.

Type: typing.TypeAlias

mydatapreprocessing.types.PandasIndex

An index that can be used as a function parameter in this library. It can be a str, but also an int index. It's usually narrowed to str | pd.Index afterwards, so a column can be accessed with the same syntax as with a column name.

Type: typing.TypeAlias

mydatapreprocessing.types.DataFormat

Type of the accepted data input format. It covers the Python objects, file paths, and URLs accepted by load_data.

Type: typing.TypeAlias
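
As an illustrative sketch, these aliases can be used in your own annotations (the function below is hypothetical, not part of the library):

>>> from mydatapreprocessing import types
>>> def halve(data: types.DataFrameOrArrayGeneric) -> types.DataFrameOrArrayGeneric:
...     """Hypothetical helper: works on DataFrames and numpy arrays alike."""
...     return data / 2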