Skrydata builds fast, scalable, cloud-enabled software that discovers insights in complex data

We use patented self-optimising, unsupervised algorithms to discover the hidden relationships within complex data without requiring any knowledge of the business or how the data was created

 

Process Discovery

Our unique process discovery technology reconstructs business and industrial processes directly from event data. It can identify bottlenecks, improve efficiency and flag variations to processes that may indicate unauthorised activity

Prediction

Skrydata's predictive engine is highly self-configuring. It combines machine learning, statistics and data mining to find the best possible predictors for a given goal. Applications include asset management, plant optimisation and churn prediction



Fraud Detection

Fraud can be hard to detect. Our deviation analysis finds targets who deviate from the normal behaviour of their peers, without needing to know what those deviations look like. This makes it ideal for detecting new types of fraud and other unusual behaviours

Process Model Discovery and Optimisation

 

Skrydata’s process analytics uses automated, efficient and scalable event-log and data-analysis methods, together with unsupervised and supervised learning techniques, to reconstruct individual workflow instances. These instances can then be clustered to identify specific processes and their respective variants, revealing both the regular characteristics of the processes and unanticipated variations among workflow instances. The toolkit also allows these results to be compared with a user-specified business goal (e.g. time performance) or with specific process characteristics.
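
For readers who want a concrete feel for the idea, the short Python sketch below reconstructs one trace per case from raw events and then groups identical traces into variants; the event data, field names and grouping method are invented for illustration and are not Skrydata’s implementation.

    from collections import defaultdict

    # Illustrative event log: (case_id, activity, timestamp) tuples.
    events = [
        ("c1", "receive_order", 1), ("c1", "check_credit", 2), ("c1", "ship", 3),
        ("c2", "receive_order", 1), ("c2", "ship", 2),
        ("c3", "receive_order", 1), ("c3", "check_credit", 2), ("c3", "ship", 3),
    ]

    # Reconstruct one workflow instance (trace) per case from the raw events.
    traces = defaultdict(list)
    for case_id, activity, ts in sorted(events, key=lambda e: (e[0], e[2])):
        traces[case_id].append(activity)

    # Cluster identical traces into process variants and count how often each occurs.
    variants = defaultdict(list)
    for case_id, trace in traces.items():
        variants[tuple(trace)].append(case_id)

    for trace, cases in sorted(variants.items(), key=lambda kv: -len(kv[1])):
        print(f"{len(cases):3d} case(s): {' -> '.join(trace)}")

Rare variants surfaced this way are exactly the "unanticipated variations" mentioned above; in practice, clustering would also tolerate near-identical traces rather than requiring exact matches.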

The toolkit can perform this analysis on raw audit logs or on the data itself (e.g. IVR event data, financial transaction processes, weblog data or industrial process event data), while processes already stored using XML-based templates (e.g. MXML, XES) can also be processed directly.
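
XES is an open XML standard for event logs, so a simplified reader can be sketched with standard tooling. The example below is a minimal, illustrative XES reader; it assumes events carry the standard "concept:name" attribute and ignores XES extensions and lifecycle information, and it is not the toolkit’s own parser.

    import xml.etree.ElementTree as ET

    def traces_from_xes(path):
        """Extract activity sequences from a XES file (simplified)."""
        root = ET.parse(path).getroot()
        traces = []
        for trace_el in root.iter():
            # Compare local tag names so namespaced files still match.
            if trace_el.tag.rsplit("}", 1)[-1] != "trace":
                continue
            activities = []
            for event_el in trace_el:
                if event_el.tag.rsplit("}", 1)[-1] != "event":
                    continue
                for attr in event_el:
                    if attr.get("key") == "concept:name":
                        activities.append(attr.get("value"))
            traces.append(activities)
        return traces

    # Hypothetical usage with an example file name:
    # print(traces_from_xes("example_log.xes"))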

 

 

Self-Configuring, Self-Optimising Model Building

 

Skrydata’s predictive model building is goal-driven: the user specifies the desired outcome, and the toolkit automatically self-configures to provide the best attainable results for the given problem. This is in contrast to other tools, which require expert knowledge of predictive analytics procedures, parameter settings and suitable pre- and post-processing methods in order to achieve useful results. Goal-driven configuration allows businesses to save on cost by reducing reliance on expert data scientists, while still allowing for human interaction and easy incorporation of domain knowledge, including business constraints and measures.
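
As a rough, hypothetical illustration of goal-driven model selection (not Skrydata’s engine), the sketch below scores a few candidate learners against a single user-specified goal and keeps the best one; the dataset, candidate models and metric are placeholders.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Toy data standing in for business data; the "goal" is simply the target column y.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    # Candidate learners the search is allowed to choose between.
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(random_state=0),
        "gradient_boosting": GradientBoostingClassifier(random_state=0),
    }

    # Score each candidate against the stated goal (here: cross-validated AUC)
    # and keep the best one; the user never picks an algorithm by hand.
    scores = {name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
              for name, model in candidates.items()}
    best = max(scores, key=scores.get)
    print(best, round(scores[best], 3))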

Skrydata’s toolkit automates the majority of the following stages: pre-processing, data cleansing, data integration and structuring, model building and post-processing for model deployment. The advantage of Skrydata’s self-configuration is that it achieves better predictive accuracy than other tools, both under their default parameter settings and where the parameters are chosen by expert data scientists. This is because the most suitable methods, learning parameters and the order in which those methods are applied vary across datasets and prediction goals.
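
The following sketch shows, in generic terms, what configuring across stages can look like: pre-processing choices and model parameters are searched jointly, because the best combination differs from one dataset and goal to the next. The pipeline, parameter grid and data are illustrative assumptions, not the toolkit’s actual methods.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    # Toy data with some missing values, standing in for raw business data.
    X, y = make_classification(n_samples=400, n_features=10, random_state=1)
    rng = np.random.default_rng(1)
    X[rng.random(X.shape) < 0.05] = np.nan

    pipeline = Pipeline([
        ("impute", SimpleImputer()),          # data cleansing step
        ("scale", StandardScaler()),          # further pre-processing step
        ("model", LogisticRegression(max_iter=1000)),
    ])

    # Pre-processing choices and model parameters are searched together.
    grid = {
        "impute__strategy": ["mean", "median"],
        "model__C": [0.1, 1.0, 10.0],
    }
    search = GridSearchCV(pipeline, grid, cv=5, scoring="accuracy")
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))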

Applications include Plant Optimisation, Asset Management, Predictive Maintenance, Churn Prediction and Customer Capture and Retention

 

 

Deviation Analysis and Ranking identifies entities whose behaviours vary from those of their peers

 

Deviation Analysis is goal-driven, not hypothesis-driven: you don’t have to know the behaviours you are looking for. Deviation Analysis looks for deviating combinations of Properties. Rather than focusing on a particular Property such as Vendor or Date, it finds unusual combinations of Properties, e.g. business meals on a Sunday in local currency.

  • Efficient pattern-matching techniques calculate deviation factors for chosen Properties of the Target’s Activities compared with their cohort
  • Properties and their patterns or behaviours are grouped and ordered according to their deviation factors

Deviation Ranking singles out Targets who deviate from normal behaviour for their cohort across a number of Activities, rather than in just one Activity. It allows the top deviating Targets to be identified across any specified combination of Activities using a range of functions, and can include temporal aspects such as increasing frequency or amounts of transactions.
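
A minimal sketch of the general idea, using invented expense data: each Target’s frequency of a Property combination is compared with the cohort’s frequency to give a deviation factor, and Targets are ranked by their largest factor. The scoring formula and data are assumptions for illustration only, not Skrydata’s algorithm.

    from collections import Counter

    # Illustrative records: (target, property combination), where the combination
    # bundles several Properties, e.g. (category, weekday, currency).
    records = [
        ("alice", ("meal", "Tue", "AUD")), ("alice", ("meal", "Wed", "AUD")),
        ("alice", ("taxi", "Tue", "AUD")),
        ("bob",   ("meal", "Tue", "AUD")), ("bob",   ("meal", "Sun", "THB")),
        ("bob",   ("meal", "Sun", "THB")),
        ("carol", ("meal", "Wed", "AUD")), ("carol", ("taxi", "Tue", "AUD")),
        ("carol", ("meal", "Tue", "AUD")),
    ]

    # Cohort-wide frequency of each Property combination.
    cohort = Counter(combo for _, combo in records)
    total = sum(cohort.values())

    # Per-target frequencies.
    per_target = {}
    for target, combo in records:
        per_target.setdefault(target, Counter())[combo] += 1

    # Deviation factor: how much more often a target uses a combination than the
    # cohort does; the target's score here is its largest factor.
    def deviation_score(target):
        counts = per_target[target]
        n = sum(counts.values())
        return max((counts[c] / n) / (cohort[c] / total) for c in counts)

    for target in sorted(per_target, key=deviation_score, reverse=True):
        print(target, round(deviation_score(target), 2))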

Skrydata’s Deviation Analysis and Ranking (DAR) delivers

  • Focused Results – Investigate Targets who behave differently to their cohort, rather than hundreds or thousands of unusual transactions.
  • Real-Time Investigation – Investigators can try different combinations and filters in real time, without lengthy database queries or data re-runs, because the data is pre-calculated.
  • Real-time drilldown - From within the toolkit, the user can drill down to any level in the data - even down to the original “raw” transaction information.

 

 

Standardisation, De-Duplication, Error Correction and Missing Value Imputation 

 

The Skrydata toolkit uses the latest algorithms in combination with proprietary techniques to provide superior performance in the detection and correction of:

  • Value inconsistencies – the toolkit standardises attribute values with respect to the expected domain of values, value format or measurement type, and performs missing value imputation
  • Record inconsistencies – the toolkit detects and resolves records whose attribute values are individually valid but invalid or inconsistent as a combination. Correction is based on the most probable value with respect to expectations, or the norm. Organisational knowledge can be incorporated in the form of constraints or business rules to which the records should adhere.
  • Duplicate records – duplicate record detection is performed through specialised similarity functions that account for potential duplicates, deliver a duplicate confidence measure and identify the primary record from the data itself, as well as providing merging options (a minimal sketch of the similarity-based idea follows this list). This function optionally allows for domain expert/end user involvement to:
  1. select suitable attribute(s) for identifying duplicate records
  2. select verification modules used for measuring confidence of duplicate records
  3. define policies for the selection of a primary record from a duplicate record group
  4. evaluate the impact of duplicate record merging and deletion
  5. define policies for merging duplicate records and
  6. specify any constraints on record deletion/merging.
  • Sources of data pollution – by matching business process instances from the Process Discovery module with the corresponding records created, the following errors can be detected:
  1. database schema errors (e.g. constraint violations, contradictory constraints …);
  2. business logic errors (e.g. flawed validation logic); or
  3. workflow execution issues (such as supervisor overrides, workflow design problems or failure to follow the as-designed workflow)
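
As a simplified illustration of similarity-based duplicate detection (not the toolkit’s own similarity functions), the sketch below averages per-field string similarities into a duplicate confidence measure and applies a naive primary-record policy; the record fields, threshold and policy are invented for the example.

    from difflib import SequenceMatcher
    from itertools import combinations

    # Illustrative customer records; field names are made up.
    records = [
        {"id": 1, "name": "Jon A. Smith", "email": "jon.smith@example.com"},
        {"id": 2, "name": "John A Smith", "email": "jon.smith@example.com"},
        {"id": 3, "name": "Maria Lopez",  "email": "m.lopez@example.com"},
    ]

    def field_similarity(a, b):
        """Simple string similarity in [0, 1] for one attribute pair."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def duplicate_confidence(r1, r2):
        """Average per-field similarity as a crude duplicate confidence measure."""
        fields = ["name", "email"]
        return sum(field_similarity(r1[f], r2[f]) for f in fields) / len(fields)

    # Flag candidate duplicate pairs above a confidence threshold and pick the
    # record with the lower id as the primary (a stand-in for a merge policy).
    for r1, r2 in combinations(records, 2):
        conf = duplicate_confidence(r1, r2)
        if conf >= 0.85:
            primary = min(r1, r2, key=lambda r: r["id"])
            print(f"likely duplicates {r1['id']} / {r2['id']} "
                  f"(confidence {conf:.2f}), primary: {primary['id']}")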

The toolkit can be used as a one-off analysis tool, or it can be integrated with business systems to provide supervised (or automatic) data cleansing and de-duplication on an ongoing basis.

Other pre-processing capabilities of the toolkit, which support both the data cleansing process and the analytics functions, include numerical data discretisation, data combining/structuring, record filtering and irrelevant-attribute filtering.
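
As an example of what numerical data discretisation involves, the sketch below bins an illustrative numeric column into coarse bands using equal-width and equal-frequency cuts; the data and band labels are made up, and the toolkit’s own discretisation methods may differ.

    import pandas as pd

    # Illustrative numeric column standing in for raw transaction amounts.
    amounts = pd.Series([12.5, 48.0, 95.0, 310.0, 1250.0, 18.0, 730.0])

    # Equal-width discretisation into coarse bands...
    equal_width = pd.cut(amounts, bins=3, labels=["low", "medium", "high"])

    # ...and equal-frequency (quantile) discretisation, which is often more
    # robust when the values are skewed.
    equal_freq = pd.qcut(amounts, q=3, labels=["low", "medium", "high"])

    print(pd.DataFrame({"amount": amounts, "equal_width": equal_width,
                        "equal_freq": equal_freq}))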