Distributed Neural Network Training and Prediction

This project is already taken and is no longer available for the 2023 HPCC Systems Intern Program

However, this project is available as a student work experience opportunity with HPCC Systems. Curious about other projects we are offering? Take a look at our Ideas List.

Student work experience opportunities also exist for students who want to suggest their own project idea. Project suggestions must be relevant to HPCC Systems and of benefit to our open source community. 

Find out about the HPCC Systems Summer Internship Program.

Project Description

Neural Networks have become a key mechanism for the analysis of many types of data. In particular, they have been found to be very effective for the analysis of complex datasets such as images, video, and time series, where classical methods have proven inadequate. The Generalized Neural Network Bundle (GNN) allows the ECL programmer to combine the parallel processing power of HPCC Systems with the powerful Neural Network capabilities of Keras and TensorFlow. The GNN bundle attaches each node in the HPCC Systems cluster to an independent Keras/TensorFlow environment and coordinates among those environments to provide a distributed environment that can parallelize all phases of Keras/TensorFlow usage. Most importantly, this coordination is transparent to the GNN user, who can program as if running on a single node.
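To illustrate what "program as if running on a single node" looks like in practice, here is a minimal ECL sketch in the style of the GNN bundle's tutorial examples. The data, layer definitions, and parameter values are hypothetical placeholders, and the exact GNNI function names and signatures should be verified against the GNN bundle documentation.

  // Illustrative sketch only: placeholder data and model; verify the GNNI
  // interface against the GNN bundle documentation before use.
  IMPORT GNN.GNNI;
  IMPORT GNN.Tensor;

  TensData := Tensor.R4.TensData;

  // Placeholder training data in GNN's sparse cell format:
  // indexes = [recordId, featureId], value = the cell value.
  trainXCells := DATASET([{[1, 1], 0.1}, {[1, 2], 0.2},
                          {[2, 1], 0.3}, {[2, 2], 0.4}], TensData);
  trainYCells := DATASET([{[1, 1], 0.0}, {[2, 1], 1.0}], TensData);

  // Wrap the cells into distributed tensors; a leading 0 in the shape
  // marks the record dimension.
  trainX := Tensor.R4.MakeTensor([0, 2], trainXCells);
  trainY := Tensor.R4.MakeTensor([0, 1], trainYCells);

  // Keras layer and compile definitions are passed through as Python strings.
  ldef := ['''layers.Dense(16, activation='relu', input_shape=(2,))''',
           '''layers.Dense(1, activation='sigmoid')'''];
  compileDef := '''compile(optimizer='adam', loss='binary_crossentropy')''';

  // One session spans the cluster; each node runs its own Keras/TensorFlow
  // instance behind the scenes, but the ECL reads like single-node training.
  s := GNNI.GetSession();
  mod := GNNI.DefineModel(s, ldef, compileDef);
  trainedMod := GNNI.Fit(mod, trainX, trainY, batchSize := 2, numEpochs := 5);
  preds := GNNI.Predict(trainedMod, trainX);
  OUTPUT(preds);

The distribution of the training data, the per-node Keras/TensorFlow sessions, and the synchronization of weights across the cluster all happen inside the GNNI calls; none of it appears in the ECL code above.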

Even with the GNN implementation in place, new variations and combinations of neural network techniques continue to be proposed to push the state of the art in machine learning and optimization. This project will research and evaluate alternative methods for distributed training of Neural Networks, and then implement the most promising methods on the HPCC Systems Platform using the GNN bundle.

This work involves the design and implementation of alternative distribution models for parallelized training and evaluation of neural networks using the HPCC Systems GNN bundle and TensorFlow. The student will evaluate the various methods and document the new capabilities, including guidance as to which algorithms are preferable for different scenarios.

If you are interested in this project, please contact the mentor shown below.


Mentor

Lili Xu
lili.xu@lexisnexisrisk.com

Backup Mentor: Roger Dev
roger.dev@lexisnexisrisk.com

Skills needed
  • Ability to build and test the HPCC Systems platform (guidance will be provided).
  • Ability to write test code.
  • Knowledge of ECL. Links are provided below to our ECL training documentation and online courses for you to become familiar with the ECL language.

Other resources
