Architect’s Guide for Continuous Machine Learning Platforms With Apache Ignite 2.8

Many machine learning (ML) and deep learning (DL) platforms are slow to respond in production environments: updating ML models can take hours or days. This is usually because ML processing runs on a separate system from the operational transaction system in order to avoid degrading transaction performance.

Join us for this webinar to learn how to overcome these challenges by using the Apache Ignite ML framework to implement a continuous machine learning (CML) platform. A CML platform runs the ML compute code on the same cluster that holds the transactional data, without impacting the performance of the transaction system. As a result, ML models can be updated in real time using the latest available data.
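To make the co-located pattern concrete, here is a minimal sketch (not taken from the webinar) of what continuous learning can look like with the Apache Ignite 2.8 ML API: a trainer is fit against an IgniteCache in place, then refreshed with trainer.update() as new records arrive. The cache name ("Transactions"), the key type, the label/feature layout, and the choice of DecisionTreeClassificationTrainer are illustrative assumptions.

```java
package demo;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.ml.dataset.feature.extractor.Vectorizer;
import org.apache.ignite.ml.dataset.feature.extractor.impl.DoubleArrayVectorizer;
import org.apache.ignite.ml.math.primitives.vector.VectorUtils;
import org.apache.ignite.ml.tree.DecisionTreeClassificationTrainer;
import org.apache.ignite.ml.tree.DecisionTreeNode;

public class ContinuousLearningSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Assumed cache of labeled rows stored as double[]: [label, f1, f2, f3], keyed by record id.
            IgniteCache<Integer, double[]> cache = ignite.getOrCreateCache("Transactions");

            // The label is the first coordinate of each stored array; the rest are features.
            Vectorizer<Integer, double[], Integer, Double> vectorizer =
                new DoubleArrayVectorizer<Integer>().labeled(Vectorizer.LabelCoordinate.FIRST);

            DecisionTreeClassificationTrainer trainer = new DecisionTreeClassificationTrainer(5, 0);

            // Initial fit runs co-located with the cache partitions - no ETL out of the cluster.
            DecisionTreeNode mdl = trainer.fit(ignite, cache, vectorizer);

            // Later, as new transactions land in the same cache, refresh the model in place.
            mdl = trainer.update(mdl, ignite, cache, vectorizer);

            // Score a new observation (three feature values, matching the assumed layout).
            double prediction = mdl.predict(VectorUtils.of(0.5, 1.2, 3.4));
            System.out.println("Predicted class: " + prediction);
        }
    }
}
```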

Topics covered will include:

  • An overview of massively distributed ML/DL architectures, including design, implementation, usage patterns, and cost/benefit analysis
  • Detailed coverage of the Apache Ignite ML/DL pipeline steps, from preprocessing to real-time prediction (see the sketch after this list)
  • Discussion of out-of-the-box algorithms, plus adapters for third-party frameworks such as Apache Spark, XGBoost, and TensorFlow, as well as custom code
  • Detailed code examples and a demo showing how to use the Apache Ignite 2.8 ML framework for continuous learning tasks
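As one possible illustration of the pipeline bullet above, the following sketch chains Apache Ignite 2.8 preprocessors (MinMaxScalerTrainer, then NormalizationTrainer) into a classifier and applies the same preprocessing chain at prediction time. The cache name ("FeatureRows"), the data layout, and the choice of algorithm are assumptions made for this example; it is not the webinar's demo code.

```java
package demo;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.ml.dataset.feature.extractor.Vectorizer;
import org.apache.ignite.ml.dataset.feature.extractor.impl.DummyVectorizer;
import org.apache.ignite.ml.math.primitives.vector.Vector;
import org.apache.ignite.ml.math.primitives.vector.VectorUtils;
import org.apache.ignite.ml.preprocessing.Preprocessor;
import org.apache.ignite.ml.preprocessing.minmaxscaling.MinMaxScalerTrainer;
import org.apache.ignite.ml.preprocessing.normalization.NormalizationTrainer;
import org.apache.ignite.ml.tree.DecisionTreeClassificationTrainer;
import org.apache.ignite.ml.tree.DecisionTreeNode;

public class PipelineSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Assumed cache of Vector rows where the last coordinate is the label.
            IgniteCache<Integer, Vector> data = ignite.getOrCreateCache("FeatureRows");

            // Step 1: extract features and label from each cached row.
            Vectorizer<Integer, Vector, Integer, Double> vectorizer =
                new DummyVectorizer<Integer>().labeled(Vectorizer.LabelCoordinate.LAST);

            // Step 2: preprocessing - scale features to [0, 1], then L1-normalize each row.
            Preprocessor<Integer, Vector> minMax = new MinMaxScalerTrainer<Integer, Vector>()
                .fit(ignite, data, vectorizer);
            Preprocessor<Integer, Vector> normalized = new NormalizationTrainer<Integer, Vector>()
                .withP(1)
                .fit(ignite, data, minMax);

            // Step 3: train on the preprocessed, co-located data.
            DecisionTreeNode mdl = new DecisionTreeClassificationTrainer(4, 0)
                .fit(ignite, data, normalized);

            // Step 4: real-time scoring. New records pass through the same preprocessing chain;
            // the last coordinate is a placeholder for the (unknown) label expected by the vectorizer.
            Vector newRow = VectorUtils.of(0.2, 0.7, 0.1, 0.0);
            double label = mdl.predict(normalized.apply(0, newRow).features());
            System.out.println("Predicted label: " + label);
        }
    }
}
```

Because each preprocessing trainer's fit() accepts the previous preprocessor in the chain, the whole pipeline runs against the partitioned cache data without pulling it out of the cluster.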
Speakers
Ken Cottrell
Solution Architect at GridGain Systems

I’ve been working with distributed computing tools and platforms for 25 years, in both pre-sales and post-sales technical roles. I’ve provided technical advisory and consulting services to customers in areas including object-oriented data modeling, data-driven business process integration, and advanced analytics tools and platforms.

For the last few years, I’ve been advising architects and developers on big data engineering and machine learning tools and processes. My role at GridGain is as a subject-matter expert in the data engineering aspects of distributed machine learning.