Andrew Polisetty - 2024 Poster Contest Resources
My name is Andrew Bala Abhilash Polisetty. I was born and raised in Hyderabad, India, where I completed my undergraduate studies at Geetanjali College of Engineering and Technology. I am now pursuing my master’s in computer science at Kennesaw State University in Georgia, having come to the United States for advanced educational opportunities. During my undergraduate years, I learned programming languages such as Java and Python and explored emerging technologies like artificial intelligence, machine learning, and the Internet of Things. I gained further experience in artificial intelligence and machine learning through several online courses and personal projects. I am now researching Model Inversion Attacks on the HPCC Systems platform under Dr. Yong Shi. Working with him has been a great experience that has developed my skills and knowledge in this field.
Poster Abstract
In machine learning, training a model is one challenge; securing it is another. Many companies use machine learning models to make decisions on sensitive data, and one of the biggest threats to such models is the Model Inversion Attack (MIA). An MIA is used to reconstruct sensitive information, such as financial data, from a trained model. Attackers repeatedly query the model and collect its predictions, which they then use as training data for a new model that mimics the original. In this way, attackers can infer sensitive information from the original model, reconstruct its training data, or build a comparable model of their own.
We took a loan dataset and used the HPCC Systems platform to perform black-box attacks and design defenses against them. We first sprayed the dataset onto the cluster through ECL Watch and then used it to build the original, attack, and prevention models. For the original model, we chose a credit-risk assessment scenario based on a person’s loan and personal details, and we built it with the LearningTrees bundle from the HPCC Systems machine learning library. An attacker can access the model’s inputs and outputs through querying and analyze the resulting data points to mount a black-box attack. In the attack model, we simulate this by querying the original model and training the attack model on its output. For the prevention model, we are exploring several approaches; for instance, we add noise to the output of the original model to mislead the attacker. The raw accuracy of the LearningTrees model is 72%. These results are acceptable, but we moved to a logistic regression model for better performance.
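The black-box attack loop described above can be sketched in a few lines. This is a self-contained toy simulation, not the actual HPCC Systems/ECL implementation: the "original model" is a hypothetical secret decision rule, the probe ranges are invented, and the surrogate is a crude threshold learner standing in for the real attack model.

```python
import random

# Hypothetical stand-in for the secret credit-risk model. The attacker
# cannot see this rule; they can only query it for outputs.
def original_model(income, loan_amount):
    # Approve (1) when the loan is small relative to income; deny (0) otherwise.
    return 1 if loan_amount < 0.4 * income else 0

random.seed(0)

# Step 1: the attacker queries the black box with probe inputs and records
# the (input, output) pairs -- this becomes the attack model's training data.
queries = [(random.uniform(20_000, 120_000), random.uniform(5_000, 60_000))
           for _ in range(500)]
labels = [original_model(inc, loan) for inc, loan in queries]

# Step 2: the attacker fits a surrogate. Here: pick the threshold on the
# loan/income ratio that minimises error on the harvested query data.
ratios = sorted(loan / inc for inc, loan in queries)
best_t, best_err = 0.0, len(queries)
for t in ratios:
    err = sum((1 if loan / inc < t else 0) != y
              for (inc, loan), y in zip(queries, labels))
    if err < best_err:
        best_t, best_err = t, err

def attack_model(income, loan_amount):
    return 1 if loan_amount / income < best_t else 0

# Step 3: measure how closely the surrogate agrees with the original
# model on fresh inputs it has never queried.
test = [(random.uniform(20_000, 120_000), random.uniform(5_000, 60_000))
        for _ in range(200)]
agreement = sum(attack_model(i, l) == original_model(i, l)
                for i, l in test) / len(test)
print(f"surrogate agreement with original model: {agreement:.0%}")
```

The point of the sketch is the workflow, not the learner: with nothing but query access, the attacker ends up with a surrogate that agrees with the original model on most inputs, which is exactly the risk the abstract describes.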
We have implemented the prediction, attack, and prevention models with logistic regression. The raw accuracy of the original model is 86%, a clear improvement over the LearningTrees model. The attack model is trained on the output of the prediction model and reaches a raw accuracy of 85%, which means the attack replicates the original model closely; from a privacy standpoint, this is a serious problem. We implemented the prevention model by adding noise to 30% of the prediction data: the value 0 is intentionally flipped to 1, and 1 to 0, for the top 30% of records. This may reduce the accuracy and performance of the model, but it prevents the attacker from reliably analyzing the data points. After adding the noise, the raw accuracy of the model is 64%.
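The label-flipping defense can be illustrated with a small simulation. Everything here is assumed for illustration: the labels are random, the 86% base accuracy is injected synthetically to mirror the reported figure, and "first 30% of records" stands in for the paper's "top 30%" selection; this is not the actual HPCC Systems implementation.

```python
import random

random.seed(1)

# Simulated ground truth (0 = deny, 1 = approve) and predictions from a
# model that is correct about 86% of the time, mirroring the reported
# raw accuracy of the logistic regression model.
true_labels = [random.randint(0, 1) for _ in range(1000)]
predictions = [y if random.random() < 0.86 else 1 - y for y in true_labels]

# Defense sketch: flip the label on 30% of the records served to external
# queries (here simply the first 30%), so that the training data an
# attacker harvests by querying the model is deliberately noisy.
noisy = list(predictions)
cutoff = int(0.30 * len(noisy))
for i in range(cutoff):
    noisy[i] = 1 - noisy[i]  # flip 0 -> 1 and 1 -> 0

def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

clean_acc = accuracy(predictions, true_labels)
noisy_acc = accuracy(noisy, true_labels)
print(f"accuracy before noise: {clean_acc:.0%}")
print(f"accuracy after noise:  {noisy_acc:.0%}")
```

The arithmetic matches the trade-off in the abstract: flipping 30% of an 86%-accurate output stream yields an expected accuracy of 0.7 × 0.86 + 0.3 × 0.14 ≈ 64%, the defended model's reported figure, at the cost of serving wrong answers on the flipped records.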
The current prevention model has both advantages and disadvantages. Our focus is to enhance it so that it protects sensitive data without compromising performance or accuracy. We will continue investigating different prevention methods to find the best possible solution.
Presentation
In this video recording, Andrew provides a tour and explanation of his poster content.
Model Inversion Attack with HPCC Systems Platform: