DayF core
1.2.1.2
DayF (Decision at your Fingertips) is a freeware AutoML development framework that lets developers work with Machine Learning models without any knowledge of AI, simply by providing a CSV dataset and the objective column
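As a minimal illustration of the "CSV dataset plus objective column" input contract described above (the dataset and column names here are invented for the example; the real framework consumes a full CSV file):

```python
import csv
import io

# Hypothetical toy dataset: two feature columns and one objective column.
RAW_CSV = """sepal_length,sepal_width,species
5.1,3.5,setosa
6.2,2.9,versicolor
"""

def split_features_objective(csv_text, objective):
    """Parse a CSV and separate the feature columns from the objective column."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    features = [{k: v for k, v in row.items() if k != objective} for row in rows]
    target = [row[objective] for row in rows]
    return features, target

features, target = split_features_objective(RAW_CSV, objective="species")
print(target)               # -> ['setosa', 'versicolor']
print(sorted(features[0]))  # -> ['sepal_length', 'sepal_width']
```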


Public Member Functions | |
| def | __init__ (self, e_c) |
| Constructor. Initializes all framework variables and starts or connects to the Spark cluster. Additionally starts PersistenceHandler and logsHandler. More... | |
| def | __del__ (self) |
| Destructor. | |
| def | shutdown_cluster (cls) |
| Class Method for cluster shutdown. More... | |
| def | connect (self) |
| Connection method to the cluster. If the cluster is up, connects to it; otherwise starts a cluster. More... | |
| def | is_alive (self) |
| Checks whether the cluster connection is alive. | |
| def | get_external_model (self, ar_metadata, type) |
| Generate pdml model class. More... | |
| def | delete_frames (self) |
| Not used: remove dataframes used during analysis execution. More... | |
| def | generate_base_path (self, base_ar, type_) |
| Generate base path to store all files [models, logs, json] relative to it. More... | |
| def | get_metric (self, algorithm_description, metric, source) |
| Get one specific metric from the execution metrics. Not tested yet. More... | |
| def | execute_normalization (self, dataframe, base_ns, model_id, filtering='NONE', exist_objective=True) |
| Method to execute normalizations based on params. More... | |
| def | define_special_spark_naive_norm (self, df_metadata) |
| Method to generate special normalizations for Naive Bayes non-negative feature restrictions. More... | |
| def | order_training (self, training_pframe, base_ar, kwargs) |
| Main method to execute sets of analyses and normalizations based on params. More... | |
| def | store_model (self, armetadata) |
| Method to save model to persistence layer from armetadata. More... | |
| def | load_model (self, armetadata) |
| Method to load model from persistence layer by armetadata. More... | |
| def | predict (self, predict_frame, base_ar, kwargs) |
| Main method to execute predictions over trained models. Takes the ar.json and executes predictions, including their metrics and storage paths. More... | |
| def | remove_models (self, arlist) |
| Method to remove a list of models from disk. More... | |
Public Attributes | |
| localfs | |
| hdfs | |
| mongoDB | |
| primary_path | |
| url | |
| nthreads | |
| spark_warehouse_dir | |
| spark_executor_mem | |
| spark_driver_mem | |
| start_spark | |
Definition at line 72 of file sparkhandler.py.
| def gdayf.handlers.sparkhandler.sparkHandler.__init__ | ( | self, | |
| e_c | |||
| ) |
Constructor. Initializes all framework variables and starts or connects to the Spark cluster. Additionally starts PersistenceHandler and logsHandler.
| self | object pointer |
| e_c | context pointer |
Definition at line 79 of file sparkhandler.py.
| def gdayf.handlers.sparkhandler.sparkHandler.connect | ( | self | ) |
Connection method to the cluster. If the cluster is up, connects to it; otherwise starts a cluster.
Definition at line 136 of file sparkhandler.py.
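The connect-or-start behaviour above can be sketched generically. This is a hypothetical stand-in class, not the gdayf implementation; the real method talks to a Spark cluster:

```python
class ClusterHandle:
    """Hypothetical sketch of the connect-or-start pattern: reuse a running
    cluster if one is alive, otherwise start a new one first."""

    def __init__(self):
        self._running = False

    def is_alive(self):
        # In the real handler this would probe the Spark cluster.
        return self._running

    def _start_cluster(self):
        self._running = True
        return "started"

    def connect(self):
        # If the cluster is up, connect to it; otherwise start one.
        if self.is_alive():
            return "connected"
        return self._start_cluster()

handle = ClusterHandle()
first = handle.connect()   # no cluster yet -> starts one
second = handle.connect()  # cluster alive -> plain connect
print(first, second)       # -> started connected
```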
| def gdayf.handlers.sparkhandler.sparkHandler.define_special_spark_naive_norm | ( | self, | |
| df_metadata | |||
| ) |
Method to generate special normalizations for Naive Bayes non-negative feature restrictions.
| self | object pointer |
| df_metadata | dataframe metadata |
Definition at line 589 of file sparkhandler.py.

| def gdayf.handlers.sparkhandler.sparkHandler.delete_frames | ( | self | ) |
Not used: remove dataframes used during analysis execution.
| self | object pointer |
Not implemented.
Definition at line 238 of file sparkhandler.py.
| def gdayf.handlers.sparkhandler.sparkHandler.execute_normalization | ( | self, | |
| dataframe, | |||
| base_ns, | |||
| model_id, | |||
| filtering = 'NONE', | |||
| exist_objective = True | |||
| ) |
Method to execute normalizations based on params.
| self | object pointer |
| dataframe | pandas dataframe |
| base_ns | NormalizationMetadata orderedDict() compatible |
| model_id | base model identifier |
| filtering | STANDARDIZE if standardize filtering rules need to be applied, or DROP if drop_columns filtering rules need to be applied |
| exist_objective | True if the objective column exists, False if not |
Definition at line 559 of file sparkhandler.py.
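A param-driven normalization with the two documented filtering modes can be sketched as follows. This is an illustrative stand-in (the rule format, column names, and data are invented), operating on a plain dict of columns rather than a pandas dataframe:

```python
# Hypothetical NormalizationMetadata-style rules: one standardize, one drop.
base_ns = [
    {"column": "age",    "action": "standardize"},
    {"column": "ticket", "action": "drop"},
]

def execute_normalization(columns, base_ns, filtering="NONE"):
    """Apply only the rules selected by `filtering` to a dict of columns."""
    out = dict(columns)
    for rule in base_ns:
        col = rule["column"]
        if filtering == "STANDARDIZE" and rule["action"] == "standardize":
            vals = out[col]
            mean = sum(vals) / len(vals)
            std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5 or 1.0
            out[col] = [(v - mean) / std for v in vals]
        elif filtering == "DROP" and rule["action"] == "drop":
            out.pop(col, None)
    return out

data = {"age": [20.0, 30.0, 40.0], "ticket": [1, 2, 3]}
standardized = execute_normalization(data, base_ns, filtering="STANDARDIZE")
dropped = execute_normalization(data, base_ns, filtering="DROP")
```

With `filtering="STANDARDIZE"` only the standardize rule fires (the `ticket` column is kept); with `filtering="DROP"` only the drop rule fires.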

| def gdayf.handlers.sparkhandler.sparkHandler.generate_base_path | ( | self, | |
| base_ar, | |||
| type_ | |||
| ) |
Generate base path to store all files [models, logs, json] relative to it.
| self | object pointer |
| base_ar | initial ar.json template pass to object instance |
| type_ | type of analysis to execute |
Definition at line 250 of file sparkhandler.py.
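The idea of hanging models, logs, and json files off one base path can be sketched as below. The directory layout here is an assumption for illustration only; the real scheme derived from `base_ar` and `type_` in gdayf may differ:

```python
import os

def generate_base_path(primary_path, model_id, type_):
    """Hypothetical layout: everything for one analysis hangs from a single
    base path built from the storage root, model id, and analysis type."""
    base = os.path.join(primary_path, model_id, type_)
    # models, logs, and json files would then be stored relative to `base`
    return base

path = generate_base_path("/data/dayf", "model_001", "train")
```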


| def gdayf.handlers.sparkhandler.sparkHandler.get_external_model | ( | self, | |
| ar_metadata, | |||
| type | |||
| ) |
Generate pdml model class.
| self | object pointer |
| ar_metadata | ArMetadata stored model |
| type | ['pojo', 'mojo'] |
Definition at line 193 of file sparkhandler.py.
| def gdayf.handlers.sparkhandler.sparkHandler.get_metric | ( | self, | |
| algorithm_description, | |||
| metric, | |||
| source | |||
| ) |
Get one specific metric from the execution metrics. Not tested yet.
| algorithm_description | (subclass executionmetricscollection) or compatible OrderedDict() |
| metric | String metric key name |
| source | [train, val, xval] |
| return | (Variable) metric value or String "Not Found" |
Definition at line 539 of file sparkhandler.py.
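The documented lookup-with-fallback contract (return the metric value, or the string "Not Found") can be sketched like this; the metrics structure below is a hypothetical stand-in for the real executionmetricscollection:

```python
from collections import OrderedDict

# Hypothetical execution metrics keyed by source, then by metric name.
metrics = OrderedDict([
    ("train", OrderedDict([("RMSE", 0.12), ("r2", 0.97)])),
    ("xval",  OrderedDict([("RMSE", 0.18)])),
])

def get_metric(execution_metrics, metric, source):
    """Return the metric value, or the string "Not Found" as documented."""
    try:
        return execution_metrics[source][metric]
    except KeyError:
        return "Not Found"

print(get_metric(metrics, "RMSE", "train"))  # -> 0.12
print(get_metric(metrics, "AUC", "val"))     # -> Not Found
```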
| def gdayf.handlers.sparkhandler.sparkHandler.load_model | ( | self, | |
| armetadata | |||
| ) |
Method to load model from persistence layer by armetadata.
| armetadata | structure identifying the stored model |
| return | armetadata if model loaded successfully or None if not loaded |
Definition at line 1068 of file sparkhandler.py.

| def gdayf.handlers.sparkhandler.sparkHandler.order_training | ( | self, | |
| training_pframe, | |||
| base_ar, | |||
| kwargs | |||
| ) |
Main method to execute sets of analyses and normalizations based on params.
| self | object pointer |
| training_pframe | pandas.DataFrame |
| base_ar | ar_template.json |
| **kwargs | extra arguments |
Definition at line 603 of file sparkhandler.py.

| def gdayf.handlers.sparkhandler.sparkHandler.predict | ( | self, | |
| predict_frame, | |||
| base_ar, | |||
| kwargs | |||
| ) |
Main method to execute predictions over trained models. Takes the ar.json and executes predictions, including their metrics and storage paths.
| self | object pointer |
| predict_frame | pandas.DataFrame |
| base_ar | ArMetadata or compatible tuple (OrderedDict(), OrderedDict()) |
| **kwargs | extra arguments |
Definition at line 1088 of file sparkhandler.py.

| def gdayf.handlers.sparkhandler.sparkHandler.remove_models | ( | self, | |
| arlist | |||
| ) |
Method to remove a list of models from disk.
| self | Object pointer |
| arlist | List of ArMetadata |
Definition at line 1307 of file sparkhandler.py.
| def gdayf.handlers.sparkhandler.sparkHandler.shutdown_cluster | ( | cls | ) |
Class Method for cluster shutdown.
| cls | class pointer |
Not implemented.
Definition at line 111 of file sparkhandler.py.
| def gdayf.handlers.sparkhandler.sparkHandler.store_model | ( | self, | |
| armetadata | |||
| ) |
Method to save model to persistence layer from armetadata.
| armetadata | structure to be stored |
| return | saved_model (True/False) |
Definition at line 1012 of file sparkhandler.py.
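The store/load contracts documented above (store returns True/False, load returns the metadata or None) can be sketched as a simple JSON round-trip. This is a hedged illustration of the return-value semantics only, not the gdayf persistence layer, which also targets HDFS and MongoDB:

```python
import json
import os
import tempfile

def store_model(armetadata, path):
    """Persist metadata to disk; return True on success, False otherwise."""
    try:
        with open(path, "w") as fh:
            json.dump(armetadata, fh)
        return True
    except OSError:
        return False

def load_model(path):
    """Return the stored metadata, or None if it could not be loaded."""
    try:
        with open(path) as fh:
            return json.load(fh)
    except (OSError, ValueError):
        return None

tmp = os.path.join(tempfile.mkdtemp(), "ar.json")
saved = store_model({"model_id": "demo"}, tmp)
loaded = load_model(tmp)
missing = load_model(tmp + ".absent")  # -> None
```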
