BASE PAPER TITLE:
Preventing Private Information Inference Attacks on Social Networks
OR
PROPOSED TITLE:
Overcoming Information Inference Attacks and Protecting Shared Data in OSN
BASE PAPER ABSTRACT:
Online social networks, such as Facebook, are increasingly
utilized by many people. These networks allow users to publish details about
themselves and to connect to their friends. Some of the information revealed
inside these networks is meant to be private. Yet it is possible to use
learning algorithms on released data to predict private information. In this
paper, we explore how to launch inference attacks using released social
networking data to predict private information. We then devise three possible
sanitization techniques that could be used in various situations. Then, we
explore the effectiveness of these techniques and attempt to use methods of
collective inference to discover sensitive attributes of the data set. We show
that we can decrease the effectiveness of both local and relational
classification algorithms by using the sanitization methods we described.
PROPOSED ABSTRACT (OUR CONTRIBUTION):
Online Social Networks (OSNs) facilitate digital social interaction and information sharing, but they also raise security and privacy issues. Although OSNs allow users to restrict access to shared data, they currently provide no mechanism to enforce privacy concerns over data associated with multiple users. To overcome this, we propose an approach that supports the protection of shared data associated with multiple users in OSNs. We are developing an access control model that captures the core of multiparty authorization requirements, along with a multiparty policy specification scheme and a corresponding policy enforcement mechanism.
EXISTING SYSTEM:
Other papers have tried to infer private information inside social networks. He et al. consider ways to infer private information via friendship links by creating a Bayesian network from the links inside a social network. While they crawl a real social network, LiveJournal, they use hypothetical attributes to analyze their learning algorithm.
Existing work has modeled and analyzed access control requirements with respect to collaborative authorization management of shared data in OSNs. The need for joint management of data sharing, especially photo sharing, in OSNs has been recognized, and recent work has provided a solution for collective privacy management in OSNs. That work considered access control policies for content co-owned by multiple users in an OSN, such that each co-owner may separately specify her/his own privacy preference for the shared content.
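To illustrate how friendship links can leak a hidden attribute, the following is a minimal Java sketch. It uses a simple majority vote over friends' disclosed attribute values as a stand-in for the full Bayesian-network approach of He et al.; all class, method, and attribute names here are hypothetical.

```java
import java.util.*;

// Sketch: guess a user's undisclosed attribute from the values that
// the user's friends have made public (majority vote over links).
public class FriendInference {
    // friends: userId -> friend ids; publicAttr: userId -> disclosed value (absent if hidden)
    public static String inferAttribute(String user,
                                        Map<String, List<String>> friends,
                                        Map<String, String> publicAttr) {
        Map<String, Integer> votes = new HashMap<>();
        for (String f : friends.getOrDefault(user, Collections.emptyList())) {
            String v = publicAttr.get(f);
            if (v != null) votes.merge(v, 1, Integer::sum);  // count each friend's disclosed value
        }
        // Return the most common value among friends, or null if no friend discloses it.
        return votes.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }
}
```

Even this trivial classifier recovers a plausible value for a user who discloses nothing, which is exactly the leakage the sanitization techniques later in this document aim to suppress.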
DISADVANTAGES OF EXISTING SYSTEM:
Private information leakage remains an open issue: even when a user withholds a sensitive attribute, it can still be inferred from the data these systems release, and they offer no safeguard against such inference attacks.
PROPOSED SYSTEM:
This paper focuses on the problem of private information leakage for individuals as a direct result of their actions as part of an online social network. We model an attack scenario as follows: suppose Facebook wishes to release data to Electronic Arts for use in advertising games to interested people. However, once Electronic Arts has this data, it may want to identify the political affiliation of users in the data for lobbying efforts. Because it would not only use the names of individuals who explicitly list their affiliation but could also, through inference, determine the affiliation of other users in the data, this would clearly be a privacy violation of hidden details. We explore how online social network data could be used to predict an individual private detail that a user is not willing to disclose (e.g., political or religious affiliation, sexual orientation), and we explore the effect of possible data sanitization approaches on preventing such private information leakage while still allowing the recipient of the sanitized data to perform inference on non-private details.
In the proposed system, we implement a proof-of-concept Facebook application for the collaborative management of shared data, called MController. Our prototype application enables multiple associated users to specify their authorization policies and privacy preferences so as to co-control a shared data item.
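The co-control idea can be sketched as follows. This is a minimal, hypothetical enforcement rule, not MController's actual API: a shared item is visible to a requester only if every associated user's policy permits it, which is one simple way to resolve conflicts among co-controllers' preferences.

```java
import java.util.*;

// Sketch of multiparty policy enforcement: each co-controller of a shared
// item (owner, tagged users, etc.) supplies an allow-list, and access is
// granted only when every controller's policy permits the requester.
public class MultipartyPolicy {
    // allowLists: controllerId -> set of user ids that controller permits
    public static boolean canView(String requester, Map<String, Set<String>> allowLists) {
        for (Set<String> allowed : allowLists.values()) {
            if (!allowed.contains(requester)) return false;  // any one denial blocks access
        }
        return true;
    }
}
```

A unanimous-permit rule is deliberately conservative; looser strategies (e.g., majority voting among co-controllers) fit the same interface by changing only the aggregation loop.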
ADVANTAGES OF PROPOSED SYSTEM:
To the best of our knowledge, this is the first work that discusses the problem of sanitizing a social network to prevent inference of private data and then examines the effectiveness of those approaches on a real-world data set. To protect privacy, we sanitize both details and the underlying link structure of the graph; that is, we delete some information from a user's profile and remove some links between friends. We also examine the effects of generalizing detail values to more generic values.
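The three sanitization operations above (deleting profile details, removing friendship links, and generalizing detail values) can be sketched in Java roughly as follows; the data structures and method names are illustrative assumptions, not the paper's actual implementation.

```java
import java.util.*;

// Sketch of the three sanitization operations applied before data release.
public class Sanitizer {
    // Delete a named detail from every user's profile.
    public static void removeDetail(Map<String, Map<String, String>> profiles, String detail) {
        for (Map<String, String> p : profiles.values()) p.remove(detail);
    }

    // Remove the undirected friendship link between two users.
    public static void removeLink(Map<String, Set<String>> friends, String a, String b) {
        Set<String> sa = friends.get(a);
        if (sa != null) sa.remove(b);
        Set<String> sb = friends.get(b);
        if (sb != null) sb.remove(a);
    }

    // Replace a detail's value with a more generic one, e.g. a city
    // with its state, using a supplied generalization mapping.
    public static void generalize(Map<String, Map<String, String>> profiles,
                                  String detail, Map<String, String> toGeneric) {
        for (Map<String, String> p : profiles.values())
            p.computeIfPresent(detail, (k, v) -> toGeneric.getOrDefault(v, v));
    }
}
```

Each operation trades utility for privacy: deleting details and links degrades local and relational classifiers respectively, while generalization retains some utility for inference on non-private details.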
SYSTEM CONFIGURATION:-
HARDWARE CONFIGURATION:-
- Processor  : Pentium IV
- Speed      : 1.1 GHz
- RAM        : 256 MB (min)
- Hard Disk  : 20 GB
- Keyboard   : Standard Windows keyboard
- Mouse      : Two- or three-button mouse
- Monitor    : SVGA
SOFTWARE CONFIGURATION:-
- Operating System     : Windows XP
- Programming Language : Java/J2EE
- Java Version         : JDK 1.6 and above
- Database             : MySQL
REFERENCE:
Raymond Heatherly, Murat Kantarcioglu, and Bhavani Thuraisingham, "Preventing Private Information Inference Attacks on Social Networks," IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 8, August 2013.