This project was completed as an internship opportunity with HPCC Systems in 2016. Curious about projects we are offering for future internships? Take a look at our Ideas List.

Find out about the HPCC Systems Summer Internship Program.

Project Description

SVD has many applications. For example, SVD can be applied to natural language processing for latent semantic analysis (LSA). LSA starts with a matrix whose rows represent words, whose columns represent documents, and whose values (elements) are counts of each word in each document. It then applies SVD to the input matrix and uses a subset of the most significant singular vectors, with their corresponding singular values, to map words and documents into a new space, called the 'latent semantic space'. In that space, documents that share word co-occurrence patterns are placed near each other, even if those words never co-occurred in the training corpus.
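As a sketch of the mapping described above (in Python/NumPy rather than ECL, and with an illustrative toy matrix, not project data), truncated SVD projects words and documents into the latent semantic space:

```python
import numpy as np

# Hypothetical toy term-document count matrix (rows = words, columns = documents);
# the values are illustrative placeholders, not the project's test collection.
A = np.array([
    [2.0, 0.0, 1.0, 0.0],   # word 0
    [1.0, 1.0, 0.0, 0.0],   # word 1
    [0.0, 2.0, 0.0, 1.0],   # word 2
    [0.0, 0.0, 2.0, 2.0],   # word 3
])

# Full SVD: A = U @ diag(s) @ Vt, singular values in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the k most significant singular vectors and values.
k = 2
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# Coordinates of words and documents in the k-dimensional latent semantic space.
word_vectors = U_k * s_k        # one row per word
doc_vectors = Vt_k.T * s_k      # one row per document
```

Words (rows of `word_vectors`) and documents (rows of `doc_vectors`) now live in the same k-dimensional space, so nearness between them can be measured directly.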
The LSA notion of term-document similarity can be applied to information retrieval, creating a system known as Latent Semantic Indexing (LSI). An LSI system measures the similarity between the terms of a query and the documents by creating a k-dimensional query vector as the sum of the k-dimensional vector representations of the individual terms, and comparing it to the k-dimensional document vectors.
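The query step can be sketched the same way (again Python/NumPy with an assumed toy matrix; names and data are illustrative, not the project's implementation):

```python
import numpy as np

# Toy term-document count matrix reduced to k dimensions via SVD (assumed setup).
A = np.array([
    [2.0, 0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0, 1.0],
    [0.0, 0.0, 2.0, 2.0],
])
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
term_vectors = U[:, :k]          # k-dimensional representation of each term
doc_vectors = Vt[:k, :].T        # k-dimensional representation of each document

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Query vector: the sum of the vectors of the terms appearing in the query.
query_terms = [0, 1]                       # row indices of the query's terms
q = term_vectors[query_terms].sum(axis=0)

# Rank documents by their similarity to the query vector.
similarities = [cosine(q, d) for d in doc_vectors]
best_doc = int(np.argmax(similarities))
```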

The implementation can be done completely in the ECL language, and only knowledge of ECL and distributed computing techniques is required. Knowledge of linear algebra will be helpful.

Completion of this project involves:

  • Selection of the test data. The test data will be a collection of open data text documents. The collection must have an open data license or be completely free of copyright restrictions. The most important aspect of the collection is that you are familiar with its subjects, so that you can judge the effectiveness of your implementation. The test text collection should be composed of 1,000 to 10,000 documents.
  • Development of the algorithm using ECL.
  • Testing the algorithm for correctness and performance, which involves comparing the approximate solution to the exact solution and validating that the results are within the specified tolerance.
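One way to sketch that validation (illustrative Python/NumPy, using the library's exact SVD as the reference and the exact rank-k truncation as a stand-in for the algorithm under test):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # stand-in input matrix

# "Exact" reference solution: full SVD from a trusted library.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Stand-in for the approximate algorithm under test: here, the rank-k truncation.
k = 5
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# By the Eckart-Young theorem, the best possible rank-k Frobenius error
# equals the square root of the sum of the discarded squared singular values.
best_error = np.sqrt(np.sum(s[k:] ** 2))
approx_error = np.linalg.norm(A - A_k, ord='fro')

# Validate that the approximate solution is within the specified tolerance.
tolerance = 1e-8
assert abs(approx_error - best_error) <= tolerance
```

A real test would substitute the candidate ECL implementation's output for `A_k` and choose a tolerance appropriate to the approximation method.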

By the GSoC mid-term review we would expect you to have:

  • Written the ECL needed to process the text documents into a dataset of term vectors.
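As one illustrative take on that processing step (plain Python with a hypothetical mini-corpus, though the project itself targets ECL):

```python
from collections import Counter

# Hypothetical mini-corpus standing in for the open-data text collection.
docs = [
    "thor cluster runs ecl jobs",
    "roxie answers ecl queries",
    "ecl jobs compile on the cluster",
]

# Tokenize and build the vocabulary (term -> row index).
tokenized = [d.split() for d in docs]
vocab = sorted({t for toks in tokenized for t in toks})
index = {t: i for i, t in enumerate(vocab)}

# Term-document count matrix as nested lists: rows = terms, columns = documents.
counts = [[0] * len(docs) for _ in vocab]
for j, toks in enumerate(tokenized):
    for term, n in Counter(toks).items():
        counts[index[term]][j] = n
```

Each row of `counts` is a term vector over the documents, ready to feed into the SVD step.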
Mentor

John Holt

Backup Mentor

Edin Muharemagic

Skills needed
  • Knowledge of ECL. Training manuals and online courses are available on the HPCC Systems website.
  • Knowledge of distributed computing techniques.
Deliverables
  • Test code demonstrating the correctness and performance of the algorithm.
  • Supporting documentation.
Other resources