Elysium Technologies Private Limited Approved by ISO 9001:2008 and AICTE for SKP Training Singapore | Madurai | Trichy | Coimbatore | Cochin | Kollam | Chennai http://www.elysiumtechnologies.com, [email protected]
IEEE Final Year Projects 2012 | Student Projects | Network Security Projects
IEEE FINAL YEAR PROJECTS 2012 – 2013
Network Security
Corporate Office: Madurai
227-230, Church road, Anna nagar, Madurai – 625 020.
0452 – 4390702, 4392702, +9199447933980
Email: [email protected], [email protected]
Website: www.elysiumtechnologies.com
Branch Office: Trichy
15, III Floor, SI Towers, Melapudur main road, Trichy – 620 001.
0431 – 4002234, +919790464324.
Email: [email protected], [email protected].
Website: www.elysiumtechnologies.com
Branch Office: Coimbatore
577/4, DB Road, RS Puram, Opp to KFC, Coimbatore – 641 002.
+919677751577
Website: Elysiumtechnologies.com, Email: [email protected]
Branch Office: Kollam
Surya Complex, Vendor junction, Kollam – 691 010, Kerala.
0474 – 2723622, +919446505482.
Email: [email protected].
Website: www.elysiumtechnologies.com
Branch Office: Cochin
4th Floor, Anjali Complex, near South Over Bridge, Valanjambalam, Cochin – 682 016, Kerala.
0484 – 6006002, +917736004002.
Email: [email protected], Website: www.elysiumtechnologies.com
NETWORK SECURITY 2012 - 2013
EGC 4201 - A Flexible Approach to Improving System Reliability with Virtual Lockstep

There is an increasing need for fault tolerance capabilities in logic devices brought about by the scaling of transistors to
ever smaller geometries. This paper presents a hypervisor-based replication approach that can be applied to commodity
hardware to allow for virtually lockstepped execution. It offers many of the benefits of hardware-based lockstep while
being cheaper and easier to implement and more flexible in the configurations supported. A novel form of processor
state fingerprinting is also presented, which can significantly reduce the fault detection latency. This further improves
reliability by triggering rollback recovery before errors are recorded to a checkpoint. The mechanisms are validated
using a full prototype and the benchmarks considered indicate an average performance overhead of approximately 14
percent with the possibility for significant optimization. Finally, a unique method of using virtual lockstep for fault
injection testing is presented and used to show that significant detection latency reduction is achievable by comparing
only a small amount of data across replicas.
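The fingerprint comparison described above can be illustrated with a small Python sketch. This is a hypothetical stand-in, not the paper's implementation: the real system hashes live processor state inside a hypervisor, while here a register dictionary and a list of dirty memory pages play that role.

```python
import hashlib

def fingerprint(registers, dirty_pages):
    """Hash a small slice of architectural state (a stand-in for the
    paper's processor-state fingerprint)."""
    h = hashlib.sha256()
    for name in sorted(registers):            # deterministic ordering
        h.update(name.encode())
        h.update(registers[name].to_bytes(8, "little"))
    for page in dirty_pages:
        h.update(page)
    return h.hexdigest()[:16]                 # only a small digest is exchanged

def diverged(primary_state, replica_state):
    """Replicas in virtual lockstep should produce identical fingerprints
    at each epoch; a mismatch triggers rollback recovery early, before a
    corrupted checkpoint is recorded."""
    return fingerprint(*primary_state) != fingerprint(*replica_state)

regs = {"rax": 1, "rip": 4096}
page = b"\x00" * 64
assert not diverged((regs, [page]), (dict(regs), [page]))
assert diverged((regs, [page]), ({"rax": 2, "rip": 4096}, [page]))
```

Comparing a short digest instead of full state is what makes the low detection latency cheap: replicas exchange a few bytes per epoch rather than their entire memory image.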
EGC 4202 - A Flexible Approach to Multisession Trust Negotiations

Trust Negotiation has been shown to be a successful, policy-driven approach for automated trust establishment through the
release of digital credentials. Current real applications require new flexible approaches to trust negotiations, especially
in light of the widespread use of mobile devices. In this paper, we present a multisession dependable approach to trust
negotiations. The proposed framework supports voluntary and unpredicted interruptions, enabling the negotiating
parties to complete the negotiation despite temporary unavailability of resources. Our protocols address issues related
to validity, temporary loss of data, and extended unavailability of one of the two negotiators. A peer is able to suspend
an ongoing negotiation and resume it with another (authenticated) peer. Negotiation portions and intermediate states
can be safely and privately passed among peers, to guarantee the stability needed to continue suspended negotiations.
We present a detailed analysis showing that our protocols have several key properties, including validity, correctness,
and minimality. Also, we show how our negotiation protocol can withstand the most significant attacks. As shown by our
complexity analysis, the introduction of the suspension and recovery procedures and of mobile negotiations does not
significantly increase the complexity of ordinary negotiations. Our protocols require a constant number of messages
whose size depends linearly on the portion of the trust negotiation carried out before the suspension.
EGC 4203 - A Learning-Based Approach to Reactive Security

Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security
can be competitive with proactive security as long as the reactive defender learns from past attacks instead of
myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature
by making worst case assumptions about the attacker: we grant the attacker complete knowledge of the defender's
strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a
reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense.
Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the
attacker's incentives and knowledge.
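The reactive strategy sketched above is inspired by online learning. The following Python sketch is an illustrative multiplicative-weights defender, not the paper's algorithm: after each observed attack the defender shifts budget toward the edges the attacker actually used, rather than overreacting to only the last attack. Edge names and the learning rate are hypothetical.

```python
def reactive_defense(edges, attacks, lr=0.5):
    """Multiplicative-weights style budget allocation. `attacks` is a list of
    rounds; each round names the edges the attacker traversed. Returns the
    defender's per-round allocation (fractions of a unit budget)."""
    weights = {e: 1.0 for e in edges}
    history = []
    for attacked in attacks:
        total = sum(weights.values())
        history.append({e: w / total for e, w in weights.items()})
        for e in attacked:                 # learn: boost attacked edges
            weights[e] *= (1.0 + lr)
    return history

alloc = reactive_defense(["a", "b"], [["a"], ["a"]])
# After repeated attacks through edge "a", more budget flows to "a".
assert alloc[-1]["a"] > alloc[-1]["b"]
```

Because the update depends only on observed attacks, the defender needs no prior model of the attacker's incentives, matching the robustness claim above.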
EGC 4204 - A Stochastic Model of Multivirus Dynamics

Understanding the spreading dynamics of computer viruses (worms, attacks) is an important research problem, and has
received much attention from the communities of both computer security and statistical physics. However, previous
studies have mainly focused on single-virus spreading dynamics. In this paper, we study multivirus spreading
dynamics, where multiple viruses attempt to infect computers while possibly combating one another because, for
example, they are controlled by multiple botmasters. Specifically, we propose and analyze a general model (and its two
special cases) of multivirus spreading dynamics in arbitrary networks (i.e., we do not make any restriction on network
topologies), where the viruses may or may not coreside on computers. Our model offers analytical results for
addressing questions such as: What are the sufficient conditions (also known as epidemic thresholds) under which the
multiple viruses will die out? What if some viruses can "rob" others? What characteristics does the multivirus epidemic
dynamics exhibit when the viruses are (approximately) equally powerful? The analytical results make a fundamental
connection between two types of factors: defense capability and network connectivity. This allows us to draw various
insights that can be used to guide security defense.
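The connection between defense capability and network connectivity can be made concrete in the single-virus special case, where a standard epidemic-threshold condition says the infection dies out when the ratio of infection rate to cure rate is below the reciprocal of the adjacency matrix's spectral radius. The sketch below illustrates that condition with pure-Python power iteration; the rates and graph are hypothetical, and this is the classical threshold, not the paper's full multivirus model.

```python
def spectral_radius(adj, iters=200):
    """Largest eigenvalue of a nonnegative symmetric adjacency matrix,
    estimated by power iteration."""
    n = len(adj)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w) or 1.0
        v = [x / lam for x in w]
    return lam

def dies_out(beta, delta, adj):
    """Classical epidemic threshold (single-virus case): the virus dies out
    when (infection rate beta / cure rate delta) < 1 / spectral_radius."""
    return beta / delta < 1.0 / spectral_radius(adj)

triangle = [[0, 1, 1],        # complete graph on 3 nodes,
            [1, 0, 1],        # spectral radius = 2
            [1, 1, 0]]
assert dies_out(0.2, 0.5, triangle)        # 0.4 < 0.5  -> dies out
assert not dies_out(0.6, 0.5, triangle)    # 1.2 >= 0.5 -> may persist
```

Denser connectivity raises the spectral radius, so the same defense capability (cure rate) that suffices on a sparse network can fail on a well-connected one.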
EGC 4205 - A Taxonomy of Buffer Overflow Characteristics

Significant work on vulnerabilities focuses on buffer overflows, in which data exceeding the bounds of an array is
loaded into the array. The loading continues past the array boundary, causing variables and state information located
adjacent to the array to change. As the process is not programmed to check for these additional changes, the process
acts incorrectly. The incorrect action often places the system in a nonsecure state. This work develops a taxonomy of
buffer overflow vulnerabilities based upon characteristics, or preconditions, that must hold for an exploitable buffer
overflow to exist. We analyze several software and hardware countermeasures to validate the approach. We then
discuss alternate approaches to ameliorating this vulnerability.
EGC 4206 - Automated Security Test Generation with Formal Threat Models

Security attacks typically result from unintended behaviors or invalid inputs. Security testing is labor intensive because
a real-world program usually has too many invalid inputs. It is highly desirable to automate or partially automate
the security-testing process. This paper presents an approach to the automated generation of security tests using formal
threat models represented as Predicate/Transition nets. It generates all attack paths, i.e., security tests, from a threat
model and converts them into executable test code according to the given Model-Implementation Mapping (MIM)
specification. We have applied this approach to two real-world systems, Magento (a web-based shopping system being
used by many online stores) and FileZilla Server (a popular FTP server implementation in C++). Threat models are built
systematically by examining all potential STRIDE (spoofing identity, tampering with data, repudiation, information
disclosure, denial of service, and elevation of privilege) threats to system functions. The security tests generated from
these models have found multiple security risks in each system. The test code for most of the security tests can be
generated and executed automatically. To further evaluate the vulnerability detection capability of the testing approach,
the security tests have been applied to a number of security mutants where vulnerabilities are injected deliberately. The
mutants are created according to the common vulnerabilities in C++ and web applications. Our experiments show that
the security tests have killed the majority of the mutants.
EGC 4207 - Automatic Reconfiguration for Large-Scale Reliable Storage Systems

Byzantine-fault-tolerant replication enhances the availability and reliability of Internet services that store critical state
and preserve it despite attacks or software errors. However, existing Byzantine-fault-tolerant storage systems either
assume a static set of replicas, or have limitations in how they handle reconfigurations (e.g., in terms of the scalability of
the solutions or the consistency levels they provide). This can be problematic in long-lived, large-scale systems where
system membership is likely to change during the system lifetime. In this paper, we present a complete solution for
dynamically changing system membership in a large-scale Byzantine-fault-tolerant system. We present a service that
tracks system membership and periodically notifies other system nodes of membership changes. The membership
service runs mostly automatically, to avoid human configuration errors; is itself Byzantine-fault-tolerant and
reconfigurable; and provides applications with a sequence of consistent views of the system membership. We
demonstrate the utility of this membership service by using it in a novel distributed hash table called dBQS that
provides atomic semantics even across changes in replica sets. dBQS is interesting in its own right because its storage
algorithms extend existing Byzantine quorum protocols to handle changes in the replica set, and because it differs from
previous DHTs by providing Byzantine fault tolerance and offering strong semantics. We implemented the membership
service and dBQS. Our results show that the approach works well, in practice: the membership service is able to
manage a large system and the cost to change the system membership is low.
EGC 4208 - Compiler-Directed Soft Error Mitigation for Embedded Systems

The protection of processor-based systems to mitigate the harmful effect of transient faults (soft errors) is gaining
importance as technology shrinks. At the same time, for large segments of embedded markets, parameters like cost and
performance continue to be as important as reliability. This paper presents a compiler-based methodology for
facilitating the design of fault-tolerant embedded systems. The methodology is supported by an infrastructure that
makes it easy to combine hardware/software soft-error mitigation techniques in order to best satisfy both usual design
constraints and dependability requirements. It is based on a generic microprocessor architecture that facilitates the
implementation of software-based techniques, providing a uniform isolated-from-target hardening core that allows the
automatic generation of protected source code (hardened code). Two case studies are presented. In the first one,
several software-based mitigation techniques are implemented and evaluated showing the flexibility of the
infrastructure. In the second one, a customized fault tolerant embedded system is designed by combining selective
protection on both hardware and software. Several trade-offs among performance, code size, reliability, and hardware
costs have been explored. Results show the applicability of the approach. Among the developed software-based
mitigation techniques, a novel selective version of the well-known SWIFT-R is presented.
EGC 4209 - Conditional Diagnosability of Augmented Cubes under the PMC Model

Processor fault diagnosis has played an important role in measuring the reliability of a multiprocessor system, and the
diagnosability of many well-known multiprocessor systems has been widely investigated. The conditional diagnosability
is a novel measure of diagnosability by adding an additional condition that any faulty set cannot contain all the
neighbors of any node in a system. In this paper, we evaluate the conditional diagnosability for augmented cubes under
the PMC model. We show that the conditional diagnosability of an n-dimensional augmented cube is 8n - 27 for n≥5.
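The closed-form result stated above is simple enough to express directly. The helper below just evaluates the paper's formula and guards its stated range; the function name is ours.

```python
def conditional_diagnosability_aq(n):
    """Conditional diagnosability of the n-dimensional augmented cube
    under the PMC model, per the result stated above: 8n - 27 for n >= 5."""
    if n < 5:
        raise ValueError("the 8n - 27 result is stated only for n >= 5")
    return 8 * n - 27

assert conditional_diagnosability_aq(5) == 13
```

So, for example, a 10-dimensional augmented cube has conditional diagnosability 8·10 − 27 = 53.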
EGC 4210 - Data-Provenance Verification for Secure Hosts

Malicious software typically resides stealthily on a user's computer and interacts with the user's computing resources.
Our goal in this work is to improve the trustworthiness of a host and its system data. Specifically, we provide a new
mechanism that ensures the correct origin or provenance of critical system information and prevents adversaries from
utilizing host resources. We define data-provenance integrity as the security property stating that the source where a
piece of data is generated cannot be spoofed or tampered with. We describe a cryptographic provenance verification
approach for ensuring system properties and system-data integrity at the kernel level. Its two concrete applications are
demonstrated in keystroke integrity verification and malicious traffic detection. Specifically, we first design and
implement an efficient cryptographic protocol that enforces keystroke integrity by utilizing the on-chip Trusted Platform
Module (TPM). The protocol prevents the forgery of fake key events by malware under reasonable assumptions. Then,
we demonstrate our provenance verification approach by realizing a lightweight framework for restricting outbound
malware traffic. This traffic-monitoring framework helps identify network activities of stealthy malware, and lends itself
to a powerful personal firewall for examining all outbound traffic of a host that cannot be bypassed.
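The keystroke-provenance idea can be sketched with a keyed MAC. This is an illustrative stand-in only: in the paper the signing key is protected by the TPM and verification happens at the kernel level, whereas here a plain in-memory key and ordinary HMAC play those roles.

```python
import hashlib
import hmac

# Hypothetical stand-in for a TPM-protected key.
KEY = b"tpm-sealed-demo-key"

def sign_event(scancode, counter):
    """Attach a provenance tag to a key event (scancode plus a monotonic
    counter, so replayed events are detectable)."""
    msg = f"{scancode}:{counter}".encode()
    return msg, hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def verify_event(msg, tag):
    """The consumer re-computes the tag; events injected by malware,
    which lacks the key, fail verification."""
    expected = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

msg, tag = sign_event(30, 1)
assert verify_event(msg, tag)           # genuine hardware event
assert not verify_event(b"30:2", tag)   # forged event, wrong tag
```

The same pattern generalizes to the traffic-monitoring application: outbound packets lacking a valid provenance tag can be attributed to stealthy malware rather than user-driven applications.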
EGC 4211 - Design and Implementation of TARF: A Trust-Aware Routing Framework for WSNs

The multihop routing in wireless sensor networks (WSNs) offers little protection against identity deception through
replaying routing information. An adversary can exploit this defect to launch various harmful or even devastating
attacks against the routing protocols, including sinkhole attacks, wormhole attacks, and Sybil attacks. The situation is
further aggravated by mobile and harsh network conditions. Traditional cryptographic techniques or efforts at
developing trust-aware routing protocols do not effectively address this severe problem. To secure the WSNs against
adversaries misdirecting the multihop routing, we have designed and implemented TARF, a robust trust-aware routing
framework for dynamic WSNs. Without tight time synchronization or known geographic information, TARF provides
trustworthy and energy-efficient routes. Most importantly, TARF proves effective against those harmful attacks developed
out of identity deception; the resilience of TARF is verified through extensive evaluation with both simulation and
empirical experiments on large-scale WSNs under various scenarios including mobile and RF-shielding network
conditions. Further, we have implemented a low-overhead TARF module in TinyOS; as demonstrated, this
implementation can be incorporated into existing routing protocols with minimal effort. Based on TARF, we also
demonstrated a proof-of-concept mobile target detection application that functions well against an antidetection
mechanism.
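Trust-aware next-hop selection of the kind described above can be sketched as follows. This is a toy illustration, not TARF's actual metric: neighbor names, the trust/energy combination, and the update constants are all hypothetical.

```python
def choose_next_hop(neighbors):
    """Pick the neighbor maximizing trust per unit energy cost -- a toy
    stand-in for combining trustworthiness with energy efficiency.
    `neighbors` maps node id -> (trust in [0, 1], energy cost)."""
    return max(neighbors, key=lambda n: neighbors[n][0] / neighbors[n][1])

def update_trust(trust, delivered, gain=0.1, penalty=0.2):
    """Adjust a neighbor's trust from observed delivery outcomes. Identity
    deception (e.g., a sinkhole replaying routing information) shows up as
    repeated delivery failures and steadily erodes trust."""
    return min(1.0, trust + gain) if delivered else max(0.0, trust - penalty)

nbrs = {"sinkhole": (0.9, 1.0), "honest": (0.8, 1.0)}
assert choose_next_hop(nbrs) == "sinkhole"   # the fake route looks attractive
nbrs["sinkhole"] = (update_trust(0.9, False), 1.0)
nbrs["sinkhole"] = (update_trust(nbrs["sinkhole"][0], False), 1.0)
assert choose_next_hop(nbrs) == "honest"     # failures erode the deceiver's trust
```

The key property mirrored here is that trust is earned from observed delivery, not from claimed identity, which is why replayed routing information alone cannot keep an attacker attractive.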
EGC 4212 - Detecting and Resolving Firewall Policy Anomalies

The advent of emerging computing technologies such as service-oriented architecture and cloud computing has
enabled us to perform business services more efficiently and effectively. However, we still suffer from unintended
security leakages by unauthorized actions in business services. Firewalls are the most widely deployed security
mechanism to ensure the security of private networks in most businesses and institutions. The effectiveness of security
protection provided by a firewall mainly depends on the quality of policy configured in the firewall. Unfortunately,
designing and managing firewall policies are often error prone due to the complex nature of firewall configurations as
well as the lack of systematic analysis mechanisms and tools. In this paper, we present an innovative policy anomaly
management framework for firewalls, adopting a rule-based segmentation technique to identify policy anomalies and
derive effective anomaly resolutions. In particular, we articulate a grid-based representation technique, providing an
intuitive cognitive sense about policy anomaly. We also discuss a proof-of-concept implementation of a visualization-
based firewall policy analysis tool called Firewall Anomaly Management Environment (FAME). In addition, we
demonstrate how efficiently our approach can discover and resolve anomalies in firewall policies through rigorous
experiments.
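One classic firewall policy anomaly is shadowing, where an earlier rule makes a later, conflicting rule unreachable. The sketch below detects it over a deliberately simplified rule model (integer ranges standing in for IP prefixes and ports); it illustrates the class of anomaly, not FAME's segmentation algorithm.

```python
def covers(a, b):
    """True if rule a's match space contains rule b's (src/dst abstracted
    to inclusive integer ranges)."""
    return (a["src"][0] <= b["src"][0] and b["src"][1] <= a["src"][1]
            and a["dst"][0] <= b["dst"][0] and b["dst"][1] <= a["dst"][1])

def find_shadowing(rules):
    """Rule j is shadowed when an earlier rule i matches everything j
    matches but takes the opposite action, so j can never fire."""
    anomalies = []
    for i, earlier in enumerate(rules):
        for j in range(i + 1, len(rules)):
            later = rules[j]
            if covers(earlier, later) and earlier["action"] != later["action"]:
                anomalies.append((i, j))
    return anomalies

rules = [
    {"src": (0, 255), "dst": (80, 80), "action": "deny"},
    {"src": (10, 20), "dst": (80, 80), "action": "allow"},  # never reached
]
assert find_shadowing(rules) == [(0, 1)]
```

Resolution then amounts to reordering, narrowing, or deleting one of the conflicting rules; the grid-based representation mentioned above is a way to show the administrator exactly which slice of the match space the conflict occupies.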
EGC 4213 - Detecting Anomalous Insiders in Collaborative Information Systems

Collaborative information systems (CISs) are deployed within a diverse array of environments that manage sensitive
information. Current security mechanisms detect insider threats, but they are ill-suited to monitor systems in which
users function in dynamic teams. In this paper, we introduce the community anomaly detection system (CADS), an
unsupervised learning framework to detect insider threats based on the access logs of collaborative environments. The
framework is based on the observation that typical CIS users tend to form community structures based on the subjects
accessed (e.g., patients' records viewed by healthcare providers). CADS consists of two components: 1) relational
pattern extraction, which derives community structures and 2) anomaly prediction, which leverages a statistical model
to determine when users have sufficiently deviated from communities. We further extend CADS into MetaCADS to
account for the semantics of subjects (e.g., patients' diagnoses). To empirically evaluate the framework, we perform an
assessment with three months of access logs from a real electronic health record (EHR) system in a large medical
center. The results illustrate that our models exhibit significant performance gains over state-of-the-art competitors. When
the number of illicit users is low, MetaCADS is the best model, but as the number grows, commonly accessed semantics
lead to hiding in a crowd, such that CADS is more prudent.
EGC 4214 - Detecting Spam Zombies by Monitoring Outgoing Messages

Compromised machines are one of the key security threats on the Internet; they are often used to launch various
security attacks such as spamming and spreading malware, DDoS, and identity theft. Given that spamming provides a
key economic incentive for attackers to recruit the large number of compromised machines, we focus on the detection
of the compromised machines in a network that are involved in the spamming activities, commonly known as spam
zombies. We develop an effective spam zombie detection system named SPOT by monitoring outgoing messages of a
network. SPOT is designed based on a powerful statistical tool called Sequential Probability Ratio Test, which has
bounded false positive and false negative error rates. In addition, we also evaluate the performance of the developed
SPOT system using a two-month e-mail trace collected in a large US campus network. Our evaluation studies show that
SPOT is an effective and efficient system in automatically detecting compromised machines in a network. For example,
among the 440 internal IP addresses observed in the e-mail trace, SPOT identifies 132 of them as being associated with
compromised machines. Out of the 132 IP addresses identified by SPOT, 126 can be either independently confirmed
(110) or highly likely (16) to be compromised. Moreover, only seven internal IP addresses associated with compromised
machines in the trace are missed by SPOT. In addition, we also compare the performance of SPOT with two other spam
zombie detection algorithms based on the number and percentage of spam messages originated or forwarded by
internal machines, respectively, and show that SPOT outperforms these two detection algorithms.
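The Sequential Probability Ratio Test underlying SPOT can be sketched directly. This is the textbook Wald SPRT, not SPOT's production implementation; the spam-rate hypotheses and error bounds below are hypothetical parameters.

```python
import math

def sprt(observations, p0=0.2, p1=0.8, alpha=0.01, beta=0.01):
    """Wald's Sequential Probability Ratio Test. `observations` are 1 (spam)
    or 0 (non-spam) for successive outgoing messages from one machine;
    p0/p1 are the assumed spam rates of a normal vs. compromised host, and
    alpha/beta the target false positive/negative rates. Returns the
    decision and the number of messages it took."""
    upper = math.log((1 - beta) / alpha)   # accept H1: compromised
    lower = math.log(beta / (1 - alpha))   # accept H0: normal
    llr = 0.0
    for n, x in enumerate(observations, 1):
        # accumulate the log-likelihood ratio, one message at a time
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "compromised", n
        if llr <= lower:
            return "normal", n
    return "undecided", len(observations)
```

The bounded false positive and false negative rates mentioned above fall out of the two thresholds: `alpha` and `beta` directly set `upper` and `lower`, and the test typically decides after only a handful of messages.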
EGC 4215 - DoubleGuard: Detecting Intrusions in Multitier Web Applications

Internet services and applications have become an inextricable part of daily life, enabling communication and the
management of personal information from anywhere. To accommodate this increase in application and data complexity,
web services have moved to a multitiered design wherein the webserver runs the application front-end logic and data
are outsourced to a database or file server. In this paper, we present DoubleGuard, an intrusion detection system (IDS) that models the
network behavior of user sessions across both the front-end webserver and the back-end database. By monitoring both
web and subsequent database requests, we are able to ferret out attacks that an independent IDS would not be able to
identify. Furthermore, we quantify the limitations of any multitier IDS in terms of training sessions and functionality
coverage. We implemented DoubleGuard using an Apache webserver with MySQL and lightweight virtualization. We
then collected and processed real-world traffic over a 15-day period of system deployment in both dynamic and static
web applications. Finally, using DoubleGuard, we were able to expose a wide range of attacks with 100 percent accuracy
while maintaining 0 percent false positives for static web services and 0.6 percent false positives for dynamic web
services.
EGC 4216 - Dynamic Security Risk Management Using Bayesian Attack Graphs

Security risk assessment and mitigation are two vital processes that need to be executed to maintain a productive IT
infrastructure. On one hand, models such as attack graphs and attack trees have been proposed to assess the cause-
consequence relationships between various network states, while on the other hand, different decision problems have
been explored to identify the minimum-cost hardening measures. However, these risk models do not help reason about
the causal dependencies between network states. Further, the optimization formulations ignore the issue of resource
availability while analyzing a risk model. In this paper, we propose a risk management framework using Bayesian
networks that enables a system administrator to quantify the chances of network compromise at various levels. We show
how to use this information to develop a security mitigation and management plan. In contrast to other similar models,
this risk model lends itself to dynamic analysis during the deployed phase of the network. A multiobjective optimization
platform provides the administrator with all trade-off information required to make decisions in a resource constrained
environment.
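Quantifying the chances of compromise over an attack graph can be sketched with a forward pass using a noisy-OR combination. This is a simplified illustration, not the paper's framework: the three-node graph and per-node exploit probabilities are hypothetical, and the sketch assumes an acyclic graph.

```python
def compromise_probabilities(graph, exploit_prob):
    """Forward pass over an acyclic attack graph. `graph` maps each node to
    its parent nodes (attack preconditions); roots are directly reachable by
    the attacker. A node is compromised if at least one parent is
    compromised (noisy-OR) and its own exploit succeeds."""
    prob = {}
    def p(node):
        if node in prob:
            return prob[node]
        parents = graph[node]
        if not parents:
            reach = 1.0                       # entry point for the attacker
        else:
            miss = 1.0
            for parent in parents:            # P(no parent is compromised)
                miss *= 1.0 - p(parent)
            reach = 1.0 - miss
        prob[node] = reach * exploit_prob[node]
        return prob[node]
    for node in graph:
        p(node)
    return prob

g = {"internet": [], "webserver": ["internet"], "db": ["webserver"]}
pe = {"internet": 1.0, "webserver": 0.6, "db": 0.5}
probs = compromise_probabilities(g, pe)
assert abs(probs["db"] - 0.3) < 1e-9   # 0.6 * 0.5 along the only path
```

Recomputing these probabilities as exploit likelihoods change is what makes this style of model usable for dynamic analysis during the deployed phase, as the abstract notes.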
EGC 4217 - Enforcing Mandatory Access Control in Commodity OS to Disable Malware

Enforcing a practical Mandatory Access Control (MAC) in a commercial operating system to tackle the malware problem is a
grand challenge but also a promising approach. The main barriers to applying MAC to defeat malware are the
incompatibility and usability problems of existing MAC systems. To address these issues, we manually analyze 2,600
malware samples and two types of MAC-enforced operating systems, and then design a novel MAC
enforcement approach, named Tracer, which incorporates intrusion detection and tracing in a commercial operating
system. The approach conceptually consists of three actions: detecting, tracing, and restricting suspected intruders.
One novelty is that it leverages light-weight intrusion detection and tracing techniques to automate security label
configuration that is widely acknowledged as a tough issue when applying a MAC system in practice. The other is that,
rather than restricting information flow as a traditional MAC does, it traces intruders and restricts only their critical
malware behaviors, where intruders represent processes and executables that are potential agents of a remote attacker.
Our prototyping and experiments on Windows show that Tracer can effectively defeat all malware samples tested via
blocking malware behaviors while not causing a significant compatibility problem.
EGC 4218 - Enhanced Privacy ID: A Direct Anonymous Attestation Scheme with Enhanced Revocation Capabilities

Direct Anonymous Attestation (DAA) is a scheme that enables the remote authentication of a Trusted Platform Module
(TPM) while preserving the user's privacy. A TPM can prove to a remote party that it is a valid TPM without revealing its
identity and without linkability. In the DAA scheme, a TPM can be revoked only if the DAA private key in the hardware
has been extracted and published widely so that verifiers obtain the corrupted private key. If the unlinkability
requirement is relaxed, a TPM suspected of being compromised can be revoked even if the private key is not known.
However, with the full unlinkability requirement intact, if a TPM has been compromised but its private key has not been
distributed to verifiers, the TPM cannot be revoked. Furthermore, a TPM cannot be revoked from the issuer, if the TPM is
found to be compromised after the DAA issuing has occurred. In this paper, we present a new DAA scheme called
Enhanced Privacy ID (EPID) scheme that addresses the above limitations. While still providing unlinkability, our scheme
provides a method to revoke a TPM even if the TPM private key is unknown. This expanded revocation property makes
the scheme useful for other applications such as driver's licenses. Our EPID scheme is efficient and provably secure
in the same security model as DAA, i.e., in the random oracle model under the strong RSA assumption and the
decisional Diffie-Hellman assumption.
EGC 4219 - Ensuring Distributed Accountability for Data Sharing in the Cloud

Cloud computing enables highly scalable services to be easily consumed over the Internet on an as-needed basis. A
major feature of the cloud services is that users' data are usually processed remotely in unknown machines that users
do not own or operate. While enjoying the convenience brought by this new emerging technology, users' fears of losing
control of their own data (particularly, financial and health data) can become a significant barrier to the wide adoption of
cloud services. To address this problem, in this paper, we propose a novel highly decentralized information
accountability framework to keep track of the actual usage of the users' data in the cloud. In particular, we propose an
object-centered approach that enables enclosing our logging mechanism together with users' data and policies. We
leverage the programmable capabilities of JARs both to create a dynamic and traveling object and to ensure that any
access to users' data will trigger authentication and automated logging local to the JARs. To strengthen users' control,
we also provide distributed auditing mechanisms. We provide extensive experimental studies that demonstrate the
efficiency and effectiveness of the proposed approaches.
The attack graph is an abstraction that reveals the ways an attacker can leverage vulnerabilities in a network to violate a
security policy. When used with attack graph-based security metrics, the attack graph may be used to quantitatively
assess security-relevant aspects of a network. The Shortest Path metric, the Number of Paths metric, and the Mean of
Path Lengths metric are three attack graph-based security metrics that can extract security-relevant information.
However, one's usage of these metrics can lead to misleading results. The Shortest Path metric and the Mean of Path
Lengths metric fail to adequately account for the number of ways an attacker may violate a security policy. The Number
of Paths metric fails to adequately account for the attack effort associated with the attack paths. To overcome these
shortcomings, we propose a complementary suite of attack graph-based security metrics and specify an algorithm for
combining their usage. We present simulated results suggesting that our approach reaches a conclusion about which of
two attack graphs corresponds to the more secure network in many instances.
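The three path-based metrics named above are easy to compute once attack paths are enumerated. The sketch below does this over a tiny hypothetical attack graph (an adjacency-list dict); it illustrates the metrics themselves, not the proposed combined algorithm.

```python
def attack_paths(graph, start, goal, limit=6):
    """Enumerate simple attack paths from start to goal, up to `limit`
    edges, via depth-first search."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == goal:
            paths.append(path)
            continue
        if len(path) > limit:
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:               # keep paths simple (no cycles)
                stack.append((nxt, path + [nxt]))
    return paths

def metrics(graph, start, goal):
    """Shortest Path, Number of Paths, and Mean of Path Lengths over the
    enumerated attack paths (assumes at least one path exists)."""
    lengths = [len(p) - 1 for p in attack_paths(graph, start, goal)]
    return {"shortest_path": min(lengths),
            "number_of_paths": len(lengths),
            "mean_path_length": sum(lengths) / len(lengths)}

g = {"attacker": ["web", "vpn"], "web": ["db"], "vpn": ["db"], "db": []}
m = metrics(g, "attacker", "db")
assert m == {"shortest_path": 2, "number_of_paths": 2, "mean_path_length": 2.0}
```

The example also shows the shortcoming the abstract points out: both routes to `db` have length 2, so Shortest Path and Mean of Path Lengths alone cannot distinguish this graph from one with a single attack route.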
In this paper we present two forwarding protocols for mobile wireless networks of selfish individuals. We assume that all
the nodes are selfish and show formally that both protocols are Nash equilibria, that is, no individual has an interest to
deviate. Extensive simulations with real traces show that our protocols introduce an extremely small overhead in terms
of delay, while the techniques we introduce to force faithful behavior have the positive side effect of improving performance by considerably reducing the number of messages (by more than 20%). We also test our protocols in the presence of a natural variation of the notion of selfishness: nodes that are selfish with outsiders but faithful with people
from the same community. Even in this case, our protocols are shown to be very efficient in detecting possible
misbehavior.
In this paper, we propose game-theoretic mechanisms to encourage truthful data sharing for distributed data mining.
One proposed mechanism uses the classic Vickrey-Clarke-Groves (VCG) mechanism, and the other relies on the
Give2Get: Forwarding in Social Mobile Wireless Networks of Selfish Individuals
Incentive Compatible Privacy-Preserving Distributed Classification
ES-MPICH2: A Message Passing Interface with Enhanced Security
EGC 4224
EGC 4225
Shapley value. Neither relies on the ability to verify the data of the parties participating in the distributed data mining
protocol. Instead, we incentivize truth telling based solely on the data mining result. This is especially useful for
situations where privacy concerns prevent verification of the data. Under reasonable assumptions, we prove that these
mechanisms are incentive compatible for distributed data mining. In addition, through extensive experimentation, we
show that they are applicable in practice.
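The Shapley value mentioned above can be computed directly for a toy three-party game; the coalition values below are invented for illustration and are not taken from the paper:

```python
from itertools import permutations

# Hypothetical 3-party data-sharing game: v(S) is the (illustrative) value
# of the mining result when coalition S pools its data.
players = ("A", "B", "C")
v = {
    frozenset(): 0, frozenset("A"): 1, frozenset("B"): 1, frozenset("C"): 2,
    frozenset("AB"): 4, frozenset("AC"): 4, frozenset("BC"): 5,
    frozenset("ABC"): 9,
}

def shapley(player):
    """Average marginal contribution of `player` over all join orders."""
    total = 0.0
    perms = list(permutations(players))
    for order in perms:
        before = frozenset(order[:order.index(player)])
        total += v[before | {player}] - v[before]
    return total / len(perms)

print({p: shapley(p) for p in players})
```

The payments sum to the grand-coalition value v(ABC), which is what makes Shapley-based payoffs a natural basis for rewarding truthful contributions.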
In this paper, we introduce the first application of the belief propagation algorithm in the design and evaluation of trust
and reputation management systems. We approach the reputation management problem as an inference problem and
describe it as computing marginal likelihood distributions from complicated global functions of many variables.
However, we observe that computing the marginal probability functions is computationally prohibitive for large-scale
reputation systems. Therefore, we propose to utilize the belief propagation algorithm to efficiently (in linear complexity)
compute these marginal probability distributions, resulting in a fully iterative, probabilistic, belief propagation-based
approach (referred to as BP-ITRM). BP-ITRM models the reputation system on a factor graph. By using a factor graph,
we obtain a qualitative representation of how the consumers (buyers) and service providers (sellers) are related on a
graphical structure. Further, by using such a factor graph, the global functions factor into products of simpler local
functions, each of which depends on a subset of the variables. Then, we compute the marginal probability distribution
functions of the variables representing the reputation values (of the service providers) by message passing between
nodes in the graph. We show that BP-ITRM is reliable in filtering out malicious/unreliable reports. We provide a detailed
evaluation of BP-ITRM via analysis and computer simulations. We prove that BP-ITRM iteratively reduces the error in the
reputation values of service providers due to the malicious raters with a high probability. Further, we observe that this
probability drops suddenly if a particular fraction of malicious raters is exceeded, which introduces a threshold property
to the scheme. Furthermore, comparison of BP-ITRM with some well-known and commonly used reputation
management techniques (e.g., Averaging Scheme, Bayesian Approach, and Cluster Filtering) indicates the superiority of
the proposed scheme in terms of robustness against attacks (e.g., ballot stuffing, bad mouthing). Finally, BP-ITRM has linear complexity in the number of service providers and consumers, far exceeding the efficiency of other
schemes.
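A heavily simplified sketch of the iterative idea, jointly refining seller reputations and rater trust so that inconsistent raters lose influence. This is not the paper's factor-graph message passing, and all ratings below are invented:

```python
# Simplified iterative reputation sketch (illustrative, not BP-ITRM itself):
# seller reputations and rater trust are refined together, so raters whose
# reports disagree with the consensus lose influence over iterations.
ratings = {  # rater -> {seller: rating in [0, 1]}
    "r1": {"s1": 0.9, "s2": 0.2},
    "r2": {"s1": 0.8, "s2": 0.3},
    "bad": {"s1": 0.1, "s2": 0.9},   # malicious rater
}

trust = {r: 1.0 for r in ratings}
for _ in range(10):  # iterate until (approximately) stable
    rep = {}
    for s in ("s1", "s2"):
        w = sum(trust[r] for r in ratings if s in ratings[r])
        rep[s] = sum(trust[r] * ratings[r][s] for r in ratings if s in ratings[r]) / w
    for r in ratings:  # trust = 1 - mean disagreement with the consensus
        errs = [abs(ratings[r][s] - rep[s]) for s in ratings[r]]
        trust[r] = max(1e-6, 1 - sum(errs) / len(errs))

print({s: round(rep[s], 2) for s in rep}, round(trust["bad"], 2))
```

After a few iterations, the malicious rater's trust drops and its reports are effectively filtered out of the reputation estimates.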
Web queries, credit card transactions, and medical records are examples of transaction data flowing in corporate data
stores, and often revealing associations between individuals and sensitive information. The serial release of these data
to partner institutions or data analysis centers in a nonaggregated form is a common situation. In this paper, we show
that correlations among sensitive values associated to the same individuals in different releases can be easily used to
violate users' privacy by adversaries observing multiple data releases, even if state-of-the-art privacy protection
Iterative Trust and Reputation Management Using Belief Propagation
JS-Reduce: Defending Your Data from Sequential Background Knowledge Attacks
EGC 4226
EGC 4227
EGC 4228
techniques are applied. We show how the above sequential background knowledge can be actually obtained by an
adversary, and used to identify with high confidence the sensitive values of an individual. Our proposed defense
algorithm is based on Jensen-Shannon divergence; experiments show its superiority with respect to other applicable
solutions. To the best of our knowledge, this is the first work that systematically investigates the role of sequential
background knowledge in serial release of transaction data.
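The Jensen-Shannon divergence underlying the defense can be computed in a few lines; the two release distributions below are hypothetical:

```python
from math import log2

def kl(p, q):
    """Kullback-Leibler divergence (base 2), with the 0*log(0) = 0 convention."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric and bounded in [0, 1] with base-2 logs."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical distributions over sensitive values in two successive releases:
release1 = [0.7, 0.2, 0.1]
release2 = [0.1, 0.2, 0.7]
print(round(js_divergence(release1, release2), 4))
```

A large divergence between successive releases is exactly what an adversary with sequential background knowledge can exploit, and what a JS-based defense tries to bound.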
As increasing amounts of sensitive personal information are aggregated into data repositories, it has become important
to develop mechanisms for processing the data without revealing information about individual data instances. The
differential privacy model provides a framework for the development and theoretical analysis of such mechanisms. In
this paper, we propose an algorithm for learning a discriminatively trained multiclass Gaussian mixture model-based
classifier that preserves differential privacy using a large margin loss function with a perturbed regularization term. We
present a theoretical upper bound on the excess risk of the classifier introduced by the perturbation.
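The perturbation idea can be illustrated with the basic Laplace mechanism on a bounded-mean query. This is a standard differential-privacy building block, not the paper's large-margin classifier; data and parameters are invented:

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5                    # uniform in [-0.5, 0.5)
    u = min(max(u, -0.4999999), 0.4999999)    # guard against log(0) at the edge
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, lo, hi, epsilon, rng=random):
    """epsilon-DP mean of values clipped to [lo, hi] via the Laplace mechanism."""
    clipped = [min(max(v, lo), hi) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (hi - lo) / len(clipped)    # one record moves the mean at most this much
    return true_mean + laplace_noise(sensitivity / epsilon, rng)

data = [3, 5, 7, 9, 4, 6, 8, 5, 6, 7]
print(round(private_mean(data, 0, 10, epsilon=1.0, rng=random.Random(0)), 2))
```

The "excess risk" bound in the abstract quantifies the analogous accuracy cost of such perturbation for the trained classifier.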
Wireless Sensor Network (WSN) nodes are often deployed in harsh environments where the possibility of permanent
and especially intermittent faults due to environmental hazards is significantly increased, while silicon aging effects are
also exacerbated. Thus, online and in-field testing is necessary to guarantee correctness of operation. At the same time,
online testing of processors integrated in WSN nodes has the requirement of minimum energy consumption, because
these devices operate on battery, cannot be connected to any external power supply, and the battery duration
determines the lifetime of the system. Software-Based Self-Test (SBST) has emerged as an effective strategy for online
testing of processors integrated in nonsafety critical applications. However, the notion of dependability includes not
only reliability but also availability. Thus, in order to encase both aspects we present a methodology for the optimization
of SBST routines from the energy perspective. The refined methodology presented in this paper can be effectively applied both when the SBST routines are not initially available and need to be downloaded to the WSN nodes, and when the SBST routines are already available in flash memory. The methodology is extended to maximize the
energy gains for WSN architectures offering clock gating or Dynamic Frequency Scaling features. Simulation results
show that energy savings at processor level are up to 36.5 percent, which depending on the characteristics of the WSN
system, can translate into several weeks of increased lifetime, especially if the routines need to be downloaded to the
WSN node.
Large Margin Gaussian Mixture Models with Differential Privacy
Mitigating Distributed Denial of Service Attacks in Multiparty Applications in the
Presence of Clock Drifts
Low Energy Online Self-Test of Embedded Processors in Dependable WSN Nodes
EGC 4230
EGC 4229
A weak point in network-based applications is that they commonly open some known communication port(s), making
themselves targets for denial of service (DoS) attacks. Considering adversaries that can eavesdrop and launch directed
DoS attacks to the applications' open ports, solutions based on pseudo-random port-hopping have been suggested. As
port-hopping requires the communicating parties to hop in a synchronized manner, these solutions suggest
acknowledgment-based protocols between a client-server pair or assume the presence of synchronized clocks.
Acknowledgments, if lost, can cause a port to stay open longer and thus be vulnerable to DoS attacks; time servers for synchronizing clocks can themselves become targets of DoS attacks. Here we study the case where the
communicating parties have clocks with rate drift, which is common in networking. We propose an algorithm, BigWheel,
for servers to communicate with multiple clients in a port-hopping manner, thus enabling support to multi-party
applications as well. The algorithm does not rely on the server having a fixed port open in the beginning, nor does it require the client to get a "first-contact" port from a third party. We also present an adaptive algorithm, HoPerAA,
for hopping in the presence of clock-drift, as well as the analysis and evaluation of the methods. The solutions are
simple, based on each client interacting with the server independently of the other clients, without the need of
acknowledgments or a time server. Provided that one has an estimate of the time it takes the adversary to detect that a port is open and launch an attack, the method we propose does not make it possible for an eavesdropping adversary to launch an attack directed at the application's open port(s).
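The core synchronization idea, both endpoints deriving the current port from a shared secret and a time-slot counter, can be sketched as follows. The key, port range, and slot encoding are illustrative choices, not the BigWheel/HoPerAA specifics:

```python
import hashlib
import hmac

def current_port(secret: bytes, slot: int, lo: int = 1024, hi: int = 65535) -> int:
    """Pseudo-random port for a given time slot, derived from a shared secret.
    Both endpoints compute the same port as long as their slot counters agree."""
    digest = hmac.new(secret, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    return lo + int.from_bytes(digest[:4], "big") % (hi - lo + 1)

secret = b"client-server shared key"        # hypothetical pre-shared secret
ports = [current_port(secret, s) for s in range(3)]
print(ports)
```

Clock drift shifts the two parties' slot counters apart, which is exactly why an adaptive scheme like HoPerAA must estimate and compensate for the drift.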
Detecting and preventing data leakage and data misuse poses a serious challenge for organizations, especially when
dealing with insiders with legitimate permissions to access the organization's systems and its critical data. In this paper,
we present a new concept, Misuseability Weight, for estimating the risk emanating from data exposed to insiders. This
concept focuses on assigning a score that represents the sensitivity level of the data exposed to the user and by that
predicts the ability of the user to maliciously exploit this data. Then, we propose a new measure, the M-score, which
assigns a misuseability weight to tabular data, discuss some of its properties, and demonstrate its usefulness in several
leakage scenarios. One of the main challenges in applying the M-score measure is in acquiring the required knowledge
from a domain expert. Therefore, we present and evaluate two approaches toward eliciting misuseability conceptions
from the domain expert.
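An illustrative, much-simplified scoring function in the spirit of the M-score. The attribute weights and the scaling rule below are invented, not the paper's measure:

```python
import math

# Expert-assigned sensitivity weights per attribute (hypothetical values).
WEIGHTS = {"name": 0.1, "diagnosis": 0.9, "salary": 0.7, "city": 0.2}

def m_score(rows):
    """Illustrative misuseability weight: the most sensitive record in the
    result, amplified (log scale) by how many records were exposed."""
    per_record = [sum(WEIGHTS[c] for c in row) / len(row) for row in rows]
    return max(per_record) * (1 + math.log10(len(rows)))

q1 = [{"name", "city"}]                      # low-sensitivity exposure
q2 = [{"name", "diagnosis", "salary"}] * 50  # sensitive attributes, many records
print(round(m_score(q1), 2), round(m_score(q2), 2))
```

The hard part the paper addresses is not this arithmetic but eliciting defensible weights from a domain expert.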
Silence suppression, an essential feature of speech communications over the Internet, saves bandwidth by disabling
voice packet transmissions when silence is detected. However, silence suppression enables an adversary to recover
talk patterns from packet timing. In this paper, we investigate privacy leakage through the silence suppression feature.
More specifically, we propose a new class of traffic analysis attacks to encrypted speech communications with the goal
of detecting speakers of encrypted speech communications. These attacks are based on packet timing information only
On Privacy of Encrypted Speech Communications
M-Score: A Misuseability Weight Measure
EGC 4231
EGC 4233
EGC 4232
and the attacks can detect speakers of speech communications made with different codecs. We evaluate the proposed
attacks with extensive experiments over different types of networks, including commercial anonymity networks and
campus networks. The experiments show that the proposed traffic analysis attacks can detect speakers of encrypted
speech communications with high accuracy based on traces that are 15 minutes long on average.
Content distribution via network coding has received a lot of attention lately. However, direct application of network
coding may be insecure. In particular, attackers can inject "bogus" data to corrupt the content distribution process so as
to hinder the information dispersal or even deplete the network resource. Therefore, content verification is an important
and practical issue when network coding is employed. When random linear network coding is used, it is infeasible for
the source of the content to sign all the data, and hence, the traditional "hash-and-sign" methods are no longer
applicable. Recently, a new on-the-fly verification technique has been proposed by Krohn et al. (IEEE S&P '04), which
employs a classical homomorphic hash function. However, this technique is difficult to apply to network coding because of its high computational and communication overhead. We explore this issue further by carefully analyzing the different types of overhead, and propose methods that reduce both the computational and communication cost while providing provable security.
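The homomorphic property that makes on-the-fly verification possible can be demonstrated in a toy group (parameters far too small for any real security):

```python
# Toy homomorphic hash over a tiny group: p = 23, subgroup order q = 11
# (11 divides p - 1), generator g = 2, with H(x) = g^x mod p.  For a coded
# block z = a*x + b*y (mod q), a verifier can check the hash without seeing
# x and y, because H(z) = H(x)^a * H(y)^b mod p.
p, q, g = 23, 11, 2        # real schemes use ~1024-bit p and vector blocks

def H(block):
    return pow(g, block % q, p)

x, y = 7, 4                # source blocks (symbols in Z_q)
a, b = 3, 5                # random coding coefficients
z = (a * x + b * y) % q    # network-coded block
expected = (pow(H(x), a, p) * pow(H(y), b, p)) % p
print(H(z) == expected)    # verifier accepts the valid coded block
```

The overhead the abstract refers to comes from doing such exponentiations per block with cryptographically large parameters, which is what the proposed methods aim to reduce.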
In 2011, Sun et al. proposed a security architecture to ensure unconditional anonymity for honest users and traceability of misbehaving users for network authorities in wireless mesh networks (WMNs). It strives to resolve the conflict between the anonymity and traceability objectives. In this paper, we attack the traceability of Sun et al.'s scheme. Our analysis shows that the trusted authority (TA) cannot trace a misbehaving client (CL) even if the client deposits the same ticket twice.
The open nature of the wireless medium leaves it vulnerable to intentional interference attacks, typically referred to as
jamming. This intentional interference with wireless transmissions can be used as a launchpad for mounting Denial-of-
Service attacks on wireless networks. Typically, jamming has been addressed under an external threat model. However,
adversaries with internal knowledge of protocol specifications and network secrets can launch low-effort jamming
attacks that are difficult to detect and counter. In this work, we address the problem of selective jamming attacks in
wireless networks. In these attacks, the adversary is active only for a short period of time, selectively targeting
messages of high importance. We illustrate the advantages of selective jamming in terms of network performance
degradation and adversary effort by presenting two case studies: a selective attack on TCP and one on routing. We
On the Security and Efficiency of Content Distribution via Network Coding
Packet-Hiding Methods for Preventing Selective Jamming Attacks
On the Security of a Ticket-Based Anonymity System with Traceability Property in
Wireless Mesh Networks
EGC 4236
EGC 4234
EGC 4235
show that selective jamming attacks can be launched by performing real-time packet classification at the physical layer.
To mitigate these attacks, we develop three schemes that prevent real-time packet classification by combining
cryptographic primitives with physical-layer attributes. We analyze the security of our methods and evaluate their
computational and communication overhead.
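One of the ideas, strong hiding via delayed key disclosure so that a reactive jammer cannot classify a packet before it has been fully received, can be sketched as follows. The hash-based keystream is purely illustrative, not one of the paper's actual schemes:

```python
import hashlib
import itertools
import os

def keystream(key: bytes, n: int) -> bytes:
    """Simple hash-counter keystream (illustrative only, not a vetted cipher)."""
    out = b""
    for ctr in itertools.count():
        if len(out) >= n:
            return out[:n]
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Strong-hiding sketch: the packet goes on the air encrypted, and the key is
# appended AFTER the payload, so a reactive jammer cannot classify the packet
# (e.g., spot a TCP ACK header) until the whole transmission is over.
packet = b"TCP ACK seq=1042"
key = os.urandom(16)
on_air = xor(packet, keystream(key, len(packet))) + key   # ciphertext || key
recovered = xor(on_air[:-16], keystream(on_air[-16:], len(packet)))
print(recovered == packet)
```

The receiver decrypts only after reception completes, which is acceptable at packet timescales but denies the jammer its real-time classification window.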
Computational Private Information Retrieval (cPIR) protocols allow a client to retrieve one bit from a database, without
the server inferring any information about the queried bit. These protocols are too costly in practice because they invoke
complex arithmetic operations for every bit of the database. In this paper, we present pCloud, a distributed system that
constitutes the first attempt toward practical cPIR. Our approach assumes a disk-based architecture that retrieves one
page with a single query. Using a striping technique, we distribute the database to a number of cooperative peers, and
leverage their computational resources to process cPIR queries in parallel. We implemented pCloud on the PlanetLab
network, and experimented extensively with several system parameters. Our results indicate that pCloud considerably reduces the query response time compared to the traditional client/server model, and has a very low
communication overhead. Additionally, it scales well with an increasing number of peers, achieving a linear speedup.
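For intuition about PIR itself, here is the classic two-server XOR construction. Note this is information-theoretic multi-server PIR, not the single-database cPIR protocol that pCloud parallelizes; sizes and indices are illustrative:

```python
import os
import secrets

# Two-server XOR PIR sketch: each server sees only a random-looking index
# subset, so neither learns which page i the client actually wants.
db = [os.urandom(8) for _ in range(16)]   # 16 "pages" of 8 bytes each

def server_answer(database, subset):
    """XOR of the pages whose indices are in `subset`."""
    acc = bytes(8)
    for idx in subset:
        acc = bytes(a ^ b for a, b in zip(acc, database[idx]))
    return acc

i = 5                                                # page the client wants
s1 = {j for j in range(16) if secrets.randbits(1)}   # uniformly random subset
s2 = s1 ^ {i}                                        # differs from s1 only at i
a1, a2 = server_answer(db, s1), server_answer(db, s2)
page = bytes(x ^ y for x, y in zip(a1, a2))          # common pages cancel out
print(page == db[i])
```

cPIR achieves the same privacy goal against a single server, but at the cost of heavy per-bit arithmetic, which is the bottleneck pCloud spreads across peers.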
This paper presents an integrated evaluation of the Persuasive Cued Click-Points graphical password scheme, including
usability and security evaluations, and implementation considerations. An important usability goal for knowledge-based
authentication systems is to support users in selecting passwords of higher security, in the sense of being from an
expanded effective security space. We use persuasion to influence user choice in click-based graphical passwords,
encouraging users to select more random, and hence more difficult to guess, click-points.
Consensus is one of the key problems in fault-tolerant distributed computing. Although the solvability of consensus
is now a well-understood problem, comparing different algorithms in terms of efficiency is still an open problem. In this
paper, we address this question for round-based consensus algorithms using communication predicates, on top of a
partial synchronous system that alternates between good and bad periods (synchronous and nonsynchronous periods).
Communication predicates together with the detailed timing information of the underlying partially synchronous system
provide a convenient and powerful framework for comparing different consensus algorithms and their implementations.
This approach allows us to quantify the required length of a good period to solve a given number of consensus
instances. With our results, we can observe several interesting issues, for example, that the number of rounds of an algorithm is not necessarily a good metric for its performance.
pCloud: A Distributed System for Practical PIR
Quantitative Analysis of Consensus Algorithms
Persuasive Cued Click-Points: Design, Implementation, and Evaluation of a
Knowledge-Based Authentication Mechanism
EGC 4237
EGC 4238
EGC 4239
Major online platforms such as Facebook, Google, and Twitter allow third-party applications, such as games and productivity applications, access to users' private online data. Such access must be authorized by users at installation
time. The Open Authorization protocol (OAuth) was introduced as a secure and efficient method for authorizing third-
party applications without releasing a user's access credentials. However, OAuth implementations do not provide the necessary fine-grained access control, nor any recommendations about which access control decisions are most appropriate. We propose an extension to the OAuth 2.0 authorization protocol that enables the provisioning of fine-grained
authorization recommendations to users when granting permissions to third-party applications. We propose a
multicriteria recommendation model that utilizes application-based, user-based, and category-based collaborative
filtering mechanisms. Our collaborative filtering mechanisms are based on previous user decisions, and application
permission requests to enhance the privacy of the overall site's user population. We implemented our proposed OAuth
extension as a browser extension that allows users to easily configure their privacy settings at application installation
time, provides recommendations on requested privacy permissions, and collects data regarding user decisions. Our
experiments on the collected data indicate that the proposed framework efficiently enhanced the user awareness and
privacy related to third-party application authorizations.
We propose and implement an innovative remote attestation framework called DR@FT for efficiently measuring a target
system based on an information flow-based integrity model. With this model, the high integrity processes of a system
are first measured and verified, and these processes are then protected from accesses initiated by low integrity
processes. For dynamic systems with frequently changing states, our framework verifies the latest state
changes of a target system instead of considering the entire system information. Our attestation evaluation adopts a
graph-based method to represent integrity violations, and the graph-based policy analysis is further augmented with a
ranked violation graph to support high semantic reasoning of attestation results. As a result, DR@FT provides efficient
and effective attestation of a system's integrity status, and offers intuitive reasoning of attestation results for security
administrators. Our experimental results demonstrate the feasibility and practicality of DR@FT.
Modern computer systems are built on a foundation of software components from a variety of vendors. While critical
applications may undergo extensive testing and evaluation procedures, the heterogeneity of software sources threatens
the integrity of the execution environment for these trusted programs. For instance, if an attacker can combine an
application exploit with a privilege escalation vulnerability, the operating system (OS) can become corrupted.
Recommendation Models for Open Authorization
Remote Attestation with Domain-Based Integrity Model and Policy Analysis
Resilient Authenticated Execution of Critical Applications in Untrusted Environments
EGC 4240
EGC 4241
Alternatively, a malicious or faulty device driver running with kernel privileges could threaten the application. While the
importance of ensuring application integrity has been studied in prior work, proposed solutions immediately terminate
the application once corruption is detected. Although this approach is sufficient for some cases, it is undesirable for
many critical applications. In order to overcome this shortcoming, we have explored techniques for leveraging a trusted
virtual machine monitor (VMM) to observe the application and potentially repair damage that occurs. In this paper, we
describe our system design, which leverages efficient coding and authentication schemes, and we present the details of
our prototype implementation to quantify the overhead of our approach. Our work shows that it is feasible to build a
resilient execution environment, even in the presence of a corrupted OS kernel, with a reasonable amount of storage and
performance overhead.
Brute force and dictionary attacks on password-only remote login services are now widespread and ever increasing.
Enabling convenient login for legitimate users while preventing such attacks is a difficult problem. Automated Turing
Tests (ATTs) continue to be an effective, easy-to-deploy approach to identify automated malicious login attempts with
reasonable cost of inconvenience to users. In this paper, we discuss the inadequacy of existing and proposed login
protocols designed to address large-scale online dictionary attacks (e.g., from a botnet of hundreds of thousands of
nodes). We propose a new Password Guessing Resistant Protocol (PGRP), derived by revisiting prior proposals
designed to restrict such attacks. While PGRP limits the total number of login attempts from unknown remote hosts to
as low as a single attempt per username, legitimate users in most cases (e.g., when attempts are made from known,
frequently-used machines) can make several failed login attempts before being challenged with an ATT. We analyze the
performance of PGRP with two real-world data sets and find it more promising than existing proposals.
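The decision rule can be sketched as a simple threshold policy. The thresholds and host lists below are hypothetical, and the full protocol has more machinery (e.g., tracking of known machines across sessions) than this sketch:

```python
from collections import defaultdict

# PGRP-style sketch: hosts with a prior successful login for this user get a
# few free failed attempts; unknown hosts face an ATT after K_UNKNOWN tries.
K_KNOWN, K_UNKNOWN = 3, 1
failed = defaultdict(int)        # (host, user) -> failed-attempt count
known = {("10.0.0.2", "alice")}  # host that previously logged in successfully

def requires_att(host, user):
    limit = K_KNOWN if (host, user) in known else K_UNKNOWN
    return failed[(host, user)] >= limit

def record_failure(host, user):
    failed[(host, user)] += 1

# A bot guessing from an unknown host: challenged after one failed attempt.
record_failure("198.51.100.7", "alice")
print(requires_att("198.51.100.7", "alice"))
# Alice mistyping from her usual machine: not yet challenged.
record_failure("10.0.0.2", "alice")
print(requires_att("10.0.0.2", "alice"))
```

The asymmetry is the point: attackers pay an ATT per guess almost immediately, while legitimate users on familiar machines rarely see one.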
Mobile Ad hoc Networks (MANETs) are highly vulnerable to attacks due to the dynamic nature of their network infrastructure. Among these attacks, routing attacks have received considerable attention since they can cause the most devastating damage to a MANET. Even though several intrusion response techniques exist to mitigate such critical
attacks, existing solutions typically attempt to isolate malicious nodes based on binary or naive fuzzy response
decisions. However, binary responses may result in unexpected network partitions, causing additional damage to
the network infrastructure, and naive fuzzy responses could lead to uncertainty in countering routing attacks in MANET.
In this paper, we propose a risk-aware response mechanism to systematically cope with the identified routing attacks.
Our risk-aware approach is based on an extended Dempster-Shafer mathematical theory of evidence introducing a
notion of importance factors. In addition, our experiments demonstrate the effectiveness of our approach with the
consideration of several performance metrics.
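The underlying Dempster-Shafer combination of evidence can be sketched for two observers over a two-hypothesis frame. The mass assignments are invented, and the paper's importance factors are not modeled here:

```python
# Dempster's rule of combination over the frame {A (attack), N (normal)};
# mass may also sit on the whole frame "AN", representing ignorance.
def combine(m1, m2):
    out = {"A": 0.0, "N": 0.0, "AN": 0.0}
    conflict = 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = set(s1) & set(s2)
            if not inter:
                conflict += v1 * v2          # contradictory evidence
            else:
                out["".join(sorted(inter))] += v1 * v2
    return {k: v / (1 - conflict) for k, v in out.items()}  # renormalize

ids_alert = {"A": 0.7, "N": 0.1, "AN": 0.2}   # one detector's evidence
watchdog  = {"A": 0.6, "N": 0.2, "AN": 0.2}   # another observer's evidence
fused = combine(ids_alert, watchdog)
print({k: round(v, 3) for k, v in fused.items()})
```

Fusing the two reports concentrates belief on the attack hypothesis more strongly than either report alone, which is what makes the combined evidence a better basis for a graded (rather than binary) response.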
Risk-Aware Mitigation for MANET Routing Attacks
Revisiting Defenses against Large-Scale Online Password Guessing Attacks
EGC 4243
EGC 4242
We present a modular redesign of TrustedPals, a smart card-based security framework for solving Secure Multiparty
Computation (SMC). Originally, TrustedPals assumed a synchronous network setting and allowed SMC to be reduced to the
problem of fault-tolerant consensus among smart cards. We explore how to make TrustedPals applicable in
environments with less synchrony and show how it can be used to solve asynchronous SMC. Within the redesign we
investigate the problem of solving consensus in a general omission failure model augmented with failure detectors. To
this end, we give novel definitions of both consensus and the class ◇P of failure detectors in the omission model, which we call ◇P(om), and show how to implement ◇P(om) and solve consensus in such a system with very weak synchrony
assumptions. The integration of failure detection and consensus into the TrustedPals framework uses tools from privacy
enhancing techniques such as message padding and dummy traffic.
Security and privacy issues have become critically important with the fast expansion of multiagent systems. Most
network applications such as pervasive computing, grid computing, and P2P networks can be viewed as multiagent
systems which are open, anonymous, and dynamic in nature. Such characteristics of multiagent systems introduce
vulnerabilities and threats to providing secured communication. One feasible way to minimize the threats is to evaluate
the trust and reputation of the interacting agents. Many trust/reputation models have done so, but they fail to properly
evaluate trust when malicious agents start to behave in an unpredictable way. Moreover, these models are ineffective in
providing quick response to a malicious agent's oscillating behavior. Another aspect of multiagent systems which is
becoming critical for sustaining good service quality is the even distribution of workload among service providing
agents. Most trust/reputation models have not yet addressed this issue. So, to cope with the strategically altering
behavior of malicious agents and to distribute workload as evenly as possible among service providers, we present in this paper a dynamic trust computation model called "SecuredTrust." We first analyze the different factors
related to evaluating the trust of an agent and then propose a comprehensive quantitative model for measuring such
trust. We also propose a novel load-balancing algorithm based on the different factors defined in our model. Simulation
results indicate that our model compared to other existing models can effectively cope with strategic behavioral change
of malicious agents and at the same time efficiently distribute workload among the service providing agents under
stable condition.
SecuredTrust: A Dynamic Trust Computation Model for Secured Communication in
Multi-agent Systems
Secure Failure Detection and Consensus in TrustedPals
EGC 4244
EGC 4245
EGC 4246
Recently, Bertino, Shang and Wagstaff proposed a time-bound hierarchical key management scheme for secure
broadcasting. Their scheme is built on elliptic curve cryptography and implemented with tamper-resistant devices. In
this paper, we present two collusion attacks on Bertino-Shang-Wagstaff scheme. The first attack does not need to
compromise any decryption device, while the second attack requires compromising only a single decryption device. Both
attacks are feasible and effective.
In this work, we suggest hardware and software components that enable the creation of a self-stabilizing OS/VMM on top
of an off-the-shelf, non-self-stabilizing processor. A simple "watchdog" hardware component, called a periodic reset
monitor (PRM), provides a basic solution. The solution is extended to stabilization-enabling hardware (SEH), which
removes any real-time requirement from the OS/VMM. A stabilization-enabling system that extends the SEH with software
components provides the user (an OS/VMM designer) with a self-stabilizing processor abstraction. The method uses only a
modest addition of hardware, which is external to the microprocessor. We demonstrate our approach on Intel's XScale
core. Moreover, we suggest methods for adapting existing system code (e.g., operating system code) to be
self-stabilizing. One method captures and enforces the configuration used by the program, thus reducing the work of the
self-stabilizing algorithm designer to considering only the dynamic (non-configurational) parts of the state. Another
method ensures that, eventually, the addresses of branch commands are examined using a sanity-check segment. This
method is then used to ensure that a sanity check is performed before critical operations. One application of the latter
method is enforcing a full separation of components in the system.
Radio Frequency Identification (RFID) has been developed as an important technology for many high-security and
high-integrity settings. In this paper, we study survivability issues for RFID. We first present an RFID survivability
experiment that defines a foundation for measuring the degree of survivability of an RFID system under varying attacks.
We then model a series of malicious scenarios using stochastic process algebras and study the effects of those attacks
on the ability of the RFID system to provide critical services even when parts of the system have been damaged. Our
simulation model relates its statistics to attack strategies and security recovery. The model helps system designers
and security specialists identify the most devastating attacks given the attacker's capabilities and the system's
recovery abilities. The goal is to improve system survivability under possible attacks. Our model is the first of its
kind to formally represent and simulate attacks on RFID systems and to quantitatively measure the degree of
survivability of an RFID system under those attacks.
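The kind of damage-and-recovery simulation described above can be illustrated with a toy Monte Carlo model that reports the fraction of service delivered over a run. This is only a stand-in for the paper's stochastic-process-algebra models: the component count, attack probability, and recovery rate below are all assumed parameters.

```python
import random

# Toy Monte Carlo stand-in for a survivability simulation. The parameters
# (attack/recovery probabilities, rounds) are illustrative assumptions.

def survivability(n_components=100, p_attack=0.2, p_recover=0.5,
                  rounds=50, seed=1):
    """Return the fraction of component-service delivered across all rounds:
    1.0 means no service loss, 0.0 means total loss."""
    rng = random.Random(seed)
    alive = [True] * n_components
    served = 0
    for _ in range(rounds):
        for i in range(n_components):
            if alive[i] and rng.random() < p_attack:
                alive[i] = False          # component damaged by the attack
            elif not alive[i] and rng.random() < p_recover:
                alive[i] = True           # security recovery restores it
        served += sum(alive)
    return served / (rounds * n_components)
```

Sweeping `p_attack` for different attack strategies while holding `p_recover` fixed mimics the abstract's comparison of attacks given a fixed recovery ability: the strategy that drives this ratio lowest is the most devastating.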
Security of Bertino-Shang-Wagstaff Time-Bound Hierarchical Key Management Scheme
for Secure Broadcasting
Stabilization Enabling Technology
Survivability Experiment and Attack Characterization for RFID
EGC
4247
Due to the unattended nature of wireless sensor networks, an adversary can physically capture and compromise sensor
nodes and then mount a variety of attacks with the compromised nodes. To minimize the damage incurred by the
compromised nodes, the system should detect and revoke them as soon as possible. To meet this need, researchers
have recently proposed a variety of node compromise detection schemes for wireless ad hoc and sensor networks. For
example, reputation-based trust management schemes identify malicious nodes but do not revoke them, due to the risk
of false positives. Similarly, software-attestation schemes detect the subverted software modules of compromised
nodes; however, they require each sensor node to be attested periodically, incurring substantial overhead. To
mitigate the limitations of the existing schemes, we propose a zone-based node compromise detection and revocation
scheme in wireless sensor networks. The main idea behind our scheme is to use sequential hypothesis testing to detect
suspect regions in which compromised nodes are likely placed. In these suspect regions, the network operator performs
software attestation against sensor nodes, leading to the detection and revocation of the compromised nodes. Through
quantitative analysis and simulation experiments, we show that the proposed scheme detects the compromised nodes
with a small number of samples while reducing false positive and negative rates, even if a substantial fraction of the
nodes in the zone are compromised. Additionally, we model the detection problem using a game theoretic analysis,
derive the optimal strategies for the attacker and the defender, and show that the attacker's gain from node compromise
is greatly limited by the defender when both the attacker and the defender follow their optimal strategies.
ZoneTrust: Fast Zone-Based Node Compromise Detection and Revocation in Wireless
Sensor Networks Using Sequential Hypothesis Testing
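The sequential-hypothesis-testing step at the core of this scheme can be sketched with Wald's sequential probability ratio test (SPRT), which the title names. The per-hypothesis probabilities and error rates below are illustrative assumptions, not the paper's tuned values.

```python
import math

# Sketch of Wald's SPRT for flagging a suspect zone. Each sample is 1
# (suspicious report from the zone) or 0 (benign). p0/p1 model the rate of
# suspicious reports in a healthy vs. compromised zone; alpha/beta bound the
# false positive and false negative rates. All values here are assumptions.

def sprt(samples, p0=0.1, p1=0.5, alpha=0.01, beta=0.01):
    """Return 'compromised', 'healthy', or 'undecided' (needs more samples)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0                              # running log-likelihood ratio
    for x in samples:
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "compromised"   # attest and revoke nodes in this zone
        if llr <= lower:
            return "healthy"
    return "undecided"

print(sprt([1, 1, 1, 1, 1]))  # prints "compromised"
```

The appeal of the SPRT here matches the abstract's claim: with well-separated `p0` and `p1`, a decision is typically reached after only a handful of samples, so attestation effort is spent only on zones already flagged as suspect.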